Find the distance between objects in a binary image - MATLAB

I have a binary image with two white vertical segments separated by a small gap. I would like to calculate the distance between the two segments, or rather the width of the gap.
My first attempt: find the profiles of the two segments (using bwboundaries and bwtraceboundary) and then intersect these profiles with horizontal lines scanning the whole image. The number of lines with no intersection represents the distance between the two segments.
I would like to find this gap without detecting the profile. Is there a way?
Thank you.

You can use measuretool from the MATLAB File Exchange, by Jan Neggers, to retrieve geometrical information from images.
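If you would rather compute the gap directly with standard Image Processing Toolbox functions, here is a minimal sketch of one possible approach, assuming the mask contains exactly two connected components (the names bw, seg1 and seg2 are introduced here for illustration):

    cc = bwconncomp(bw);                      % label the two segments
    seg1 = false(size(bw));  seg1(cc.PixelIdxList{1}) = true;
    seg2 = false(size(bw));  seg2(cc.PixelIdxList{2}) = true;
    d   = bwdist(seg1);                       % Euclidean distance from every pixel to segment 1
    gap = min(d(seg2));                       % closest approach of segment 2 to segment 1

Note that the distance is measured pixel centre to pixel centre, so the number of empty pixels in the gap is roughly gap - 1.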

Related

Detecting a line in a JPEG image

I'm new to Swift and image processing, and I didn't find any existing program that does what I want. I have thousands of pages of questionnaires, but the OMR (Optical Mark Recognition) freeware I use fails to detect the boxes. That is because the questionnaires were printed either by me or by the participants in the study, yielding different images (scale and rotation). Simply straightening the image is not sufficient. Luckily, there is a horizontal line somewhere near the top of each page. So the algorithm would look something like this:
1. Select all the JPEGs to transform (done)
2. Enter the coordinates of the target line (done)
3. For each JPEG image:
3a. Load the image (NSData? not UIImage since it is an App)
3b. Uncompress the image
3c. Detect the line near the top of the page
3d. Calculate and apply the rotation angle and the translation (I found free Java source code that does this)
3e. Save the image under a modified name
I need your help for steps 3a-3b. For step 3c, should I use a Canny edge detector followed by a Hough transform?
Any thoughts would be appreciated.
---- EDIT ----
Here is an image describing the problem. On the upper part (Patient #1), the coordinates of the top horizontal line are (294, 242) to (1437, 241). On the lower part (Patient #2), the coordinates of the top horizontal line are (299, 230) to (1439, 230). This seems like a small difference, but the OMR looks at the ROIs (i.e. the boxes) at fixed coordinates. In other scanned images, the difference may be even greater and the top line may not be horizontal (e.g. (X1, Y1) = (320, 235) and (X2, Y2) = (1480, 220)).
My idea is to get a template for the check boxes (the OMR does it) and the coordinates of the top line once and for all (I can get them with Paint or whatever), then align all the images to this template (using their top line) before running the OMR. A scaling, a rotation and a translation may be needed. In other words, all the images should be perfectly stackable on the template image for the OMR to perform correctly...
--- EDIT Dec 26th ---
I've translated the Probabilistic Hough Transform of OpenCV into Swift (open C++ code from GitHub). Unfortunately, the segments detected are too short (i.e. the entire line segment is not captured). I'm wondering: does it make sense to use a Canny edge detector before the Hough transform to detect a single segment of a black line on a white page?
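On the Canny-before-Hough question: yes, the usual pipeline is edge detection first, then the Hough transform on the edge map. Here is a rough conceptual sketch, written in MATLAB only because the rest of this page uses it (the equivalent OpenCV calls exist for the Swift/C++ route); the file name and the FillGap/MinLength values are placeholders to tune:

    I  = rgb2gray(imread('page.jpg'));                 % placeholder file name; assumes an RGB scan
    BW = edge(I, 'canny');                             % edge map fed to the Hough transform
    [H, T, R] = hough(BW);
    P  = houghpeaks(H, 5);                             % a few strongest candidate lines
    L  = houghlines(BW, T, R, P, 'FillGap', 50, 'MinLength', 400);
    lens = arrayfun(@(s) sum((double(s.point2) - double(s.point1)).^2), L);
    [~, k] = max(lens);                                % keep the longest detected segment
    p1 = double(L(k).point1);  p2 = double(L(k).point2);
    tilt = atan2d(p2(2) - p1(2), p2(1) - p1(1));       % tilt of the top line in degrees
    J = imrotate(I, tilt);                             % level the page (flip the sign if it rotates the wrong way)

For the "segments too short" problem with the probabilistic variant, the analogous knobs are the maximum allowed gap and the minimum line length, so raising those may already help.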

Is it possible to bridge more than one pixel in MATLAB?

I have several lines in a binary image. I know the code bridgeBW = bwmorph(closeBW, 'bridge'); will connect the lines if they are close enough, but so far I've only seen it do that within a one-pixel range. Is there a way to increase the distance and bridge lines that are farther away?
I ended up using a line-based strel instead of one defined by a shape.
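For reference, a minimal sketch of that kind of closing with a line-shaped structuring element (the length 7 and the 0-degree orientation are placeholders to tune for your gap width and line direction):

    se = strel('line', 7, 0);              % length in pixels, orientation in degrees
    bridgedBW = imclose(closeBW, se);      % closes gaps up to roughly the strel length

bwmorph(...,'bridge') only looks at the immediate 3x3 neighbourhood, which is why it cannot bridge more than one pixel.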

How to extract LBP features from facial images in MATLAB?

I'm not familiar with Local Binary Pattern (LBP), could anyone help me to know how to extract LBP features from facial images (I need a simple code example)?
While searching, I found this code, but I didn't understand it.
First of all, you need to split the face into a certain number of sections.
For each of these sections you then have to loop through all of the pixels contained within that section and get their values (greyscale or colour values).
For each pixel, check the values of the pixels that border it (the diagonals plus up, down, left and right) and save them.
For each of these directions, compare the neighbour's value with the original pixel's value: if it is greater, assign it a 1, and if it is less, assign it a 0.
The previous steps give you a list of 1s and 0s. Put these digits together and you get a large binary number; convert it to decimal and you have a number assigned to that pixel. Save this number per pixel.
After you have a decimal number for each pixel within a section, you can average all of the values to get an average number for that section.
This may not be the best description of how this works, so here is a useful picture which might help you.
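To make the steps above concrete, here is a rough sketch of the basic 3x3 LBP code computed per pixel (no interpolation, no uniform patterns; I is assumed to be a greyscale image already cropped to one section):

    I = double(I);
    [rows, cols] = size(I);
    lbp = zeros(rows, cols);
    % the 8 neighbours, ordered clockwise starting at the top-left
    dr = [-1 -1 -1  0  1  1  1  0];
    dc = [-1  0  1  1  1  0 -1 -1];
    for r = 2:rows-1
        for c = 2:cols-1
            code = 0;
            for k = 1:8
                % neighbour >= centre contributes a 1, otherwise a 0
                code = code*2 + (I(r+dr(k), c+dc(k)) >= I(r, c));
            end
            lbp(r, c) = code;              % value in 0..255 for this pixel
        end
    end

The per-section descriptor is then built from these codes; the answer above averages them, though a 256-bin histogram per section is the more common choice.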
There is an extractLBPFeatures function in the R2015b release of the Computer Vision System Toolbox for MATLAB.
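A minimal usage sketch (the file name and cell size below are placeholders):

    I = rgb2gray(imread('face.png'));                        % placeholder file name; assumes an RGB image
    features = extractLBPFeatures(I, 'CellSize', [32 32]);   % one LBP histogram per 32x32 cell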

Upper & Lower profile of a given shape

I would like to know if there is a method to extract the upper and lower profiles of a connected component.
One could first extract the contour and then split it into two sets of pixels, those on top and those on the bottom, but I don't know how to decide which set a given contour pixel belongs to.
Thanks in advance.
I believe you are looking for bwboundaries, which allows you to trace the boundary of a binary mask in an image.
Once you have traced the boundary of your object, you can divide it into "upper" and "lower" parts.
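One simple way to do the split, sketched here under the assumption that the object has a single boundary and that "upper"/"lower" means the topmost/bottommost boundary row in each column (bw is the binary mask):

    B = bwboundaries(bw);                      % trace the object boundary
    b = B{1};                                  % boundary pixels as [row, col] pairs
    cols  = unique(b(:,2))';
    upper = arrayfun(@(c) min(b(b(:,2)==c, 1)), cols);   % topmost boundary row per column
    lower = arrayfun(@(c) max(b(b(:,2)==c, 1)), cols);   % bottommost boundary row per column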

Line detection using PIL

Given an image consisting of black lines (a few pixels wide) on a white background, what is a good way to find the coordinates along the lines, say for every 10th pixel or so? I am considering using PIL for the task, but other Python- or Java-based libraries would also be OK.
Ideally the coordinates would point to the middle of the line, but as the lines are narrow, it's enough that they point somewhere inside the line.
A very short line or a point should be identified with at least one coordinate.
Usually, the Hough transform is used to find lines. It gives you the parameters describing each line (which can be transformed easily between different representations), and you can sample this parametrisation to get your sample points. See http://en.wikipedia.org/wiki/Hough_transform and https://stackoverflow.com/questions/tagged/hough-transform+python
I only found this Python implementation, http://coding-experiments.blogspot.co.at/2011/05/ellipse-detection-in-image-by-using.html, which actually searches for ellipses.
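To illustrate the "sample the line parameters" idea, here is a small sketch in MATLAB (used only for consistency with the rest of this page; OpenCV's HoughLines in Python returns the same normal form rho = x*cos(theta) + y*sin(theta)). The rho and theta values are placeholders, not real data:

    rho = 42;  theta = deg2rad(30);      % placeholder line parameters
    t = 0:10:200;                        % arc-length steps of 10 pixels along the line
    x = rho*cos(theta) - t*sin(theta);   % the points (x, y) lie on the line for every t
    y = rho*sin(theta) + t*cos(theta);

Since the lines are only a few pixels wide, each sampled (x, y) can be snapped to the nearest foreground pixel so the coordinates fall inside the line.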