Ways to find three curves from a palm image - MATLAB

I am asked to extract three curves from palm images.
original image
expected result.
The curves have lower intensity than neighboring pixels, so I tried morphology to extract those pixels. The MATLAB code follows:
% read the image, convert to grayscale, and downsize
RGBImage = imread('E:\ImageProcessing\Palm\Matlab\IMG_20140817_144549.jpg');
GrayImage = rgb2gray(RGBImage);
GrayImage = imresize(GrayImage, [768, 1024]);
% bottom-hat filtering highlights dark curves on a brighter background
se = strel('disk', 10);
GrayImage = imbothat(GrayImage, se);
figure; imshow(GrayImage);
The curves still have only slightly higher intensity values than neighboring pixels. Maybe I need to use a ridge filter to enhance the curves. I used the Hessian-based Frangi vesselness filter from the MATLAB File Exchange (http://www.mathworks.com/matlabcentral/fileexchange/24409-hessian-based-frangi-vesselness-filter), but the filtered result is not satisfactory. I would appreciate any advice, links, or papers.

This is going to be difficult from the example image you posted, but here is an approach I would suggest that might get you in the right direction:
Use an HSV filter to segment the skin tones from the rest of the image. Now you just have the palm; mask out the rest of the image.
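A minimal sketch of that masking step, assuming skin falls in a low-hue, mid-saturation band (the thresholds here are illustrative guesses, not tuned values):
% segment skin tones in HSV; thresholds are rough guesses to tune per image
hsv = rgb2hsv(RGBImage);
skinMask = hsv(:,:,1) < 0.1 & hsv(:,:,2) > 0.2 & hsv(:,:,2) < 0.7;
skinMask = imfill(skinMask, 'holes');
skinMask = bwareaopen(skinMask, 5000);   % keep only the large palm region
maskedGray = rgb2gray(RGBImage);
maskedGray(~skinMask) = 0;               % mask everything but the palm
figure; imshow(maskedGray);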
Next, try using an LBP filter to isolate the features you want. The LBP filter is contrast invariant, so changes in lighting levels etc. will be irrelevant.
Or, if you want a simpler approach, try using a Canny filter; that should probably give you the lines.
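A minimal sketch of the Canny route (the threshold pair and sigma are starting guesses, not tuned values):
% Canny on the (masked) grayscale palm image; tune thresholds and sigma
edges = edge(GrayImage, 'canny', [0.05 0.15], 2);
figure; imshow(edges);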
Be warned, it's going to be messy.

Related

AI Neural Network Wrong Handwritten Digit Prediction due to inverted color. Octave/MATLAB?

My program in Octave uses a neural network to recognize handwritten digits. The problem is that it will not recognize a digit correctly if the color is changed: when the colors are inverted, it predicts incorrectly. For example:
The images above contain the same number with the same pattern, but with inverted colors.
I am already using RGB-to-grayscale conversion. How can I overcome this problem? Is there a better option than using separate training examples for inverted-color images?
Feature Extraction
To generalise @bakkal's suggestion of using edges, one can extract many types of image features: edges, corners, blobs, ridges, etc. There is actually a page on MathWorks with a few examples, including number recognition using HOG (histogram of oriented gradients) features.
Such techniques should work for more complex images too, where edges are not always the best features. Extracting HOG features from your two images using MATLAB's extractHOGFeatures:
I believe you can use vlfeat for HOG features if you have Octave instead.
Another important thing to keep in mind is that all images should have the same size. I have resized both of your images to 500x500, but this choice is arbitrary.
The code to generate the image above:
close all; clear; clc;
% reading in
img1 = rgb2gray(imread('img1.png'));
img2 = rgb2gray(imread('img2.png'));
img_size = [500 500]; % all images should have the same size
img1_resized = imresize(img1, img_size);
img2_resized = imresize(img2, img_size);
% extracting features
[hog1, vis1] = extractHOGFeatures(img1_resized);
[hog2, vis2] = extractHOGFeatures(img2_resized);
% plotting
figure(1);
subplot(1, 2, 1);
plot(vis1);
subplot(1, 2, 2);
plot(vis2);
You do not have to be limited to HOG features. One can also quickly try SURF features:
Again, the color inversion does not matter, because the features match. But you can see that HOG features are probably the better choice here, since the 20 plotted points/blobs do not represent the number 6 all that well. The code to get the above in MATLAB:
% extracting SURF features
points1 = detectSURFFeatures(img1_resized);
points2 = detectSURFFeatures(img2_resized);
% plotting SURF Features
figure(2);
subplot(1, 2, 1);
imshow(img1_resized);
hold on;
plot(points1.selectStrongest(20));
hold off;
subplot(1, 2, 2);
imshow(img2_resized);
hold on;
plot(points2.selectStrongest(20));
hold off;
To summarise: depending on the problem, you can choose different types of features. Most of the time, choosing raw pixel values is not good enough, as you saw from your own experience, unless you have a very large dataset encapsulating all possible cases.
Edge Extraction
If you extract the edges from your images, you'll see that the result is largely invariant in that regard; both versions of your image look almost identical after the transformation.
Below I show how the image looks when you extract the edges using Laplacian edge detection, for both the "white on black" and "black on white" images:
The idea is to train your network on the edges, to gain some invariance with regard to the variation you described.
Here are some resources for MATLAB/Octave edge extraction:
https://mathworks.com/discovery/edge-detection.html
https://octave.sourceforge.io/image/function/edge.html
I've done the edge extraction using Python and OpenCV with edges_image = cv2.Laplacian(original_image, cv2.CV_64F). I may post a MATLAB/Octave sample if I can fix my install :)
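In the meantime, a rough MATLAB/Octave equivalent of that call might look like this (in Octave, fspecial and imfilter come from the image package):
% approximate equivalent of cv2.Laplacian(original_image, cv2.CV_64F)
img = im2double(rgb2gray(imread('img1.png')));
lap = fspecial('laplacian');           % 3x3 Laplacian kernel
edges_image = imfilter(img, lap, 'replicate');
imshow(mat2gray(abs(edges_image)));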
Detect dominant color and invert if required
Another way would be to settle on one version; let's say you've trained the network on the "black text on white background" variant.
Now when you input an image, first detect whether the dominant color / background is black or white, then invert if required.
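A minimal sketch of that check, assuming grayscale digit images where the background dominates the pixel count:
% if the mean intensity is low, the background is probably dark -- invert
g = im2double(rgb2gray(imread('img1.png')));
if mean(g(:)) < 0.5
    g = 1 - g;   % flip to the "black text on white background" convention
end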

Subpixel edge detection for almost vertical edges

I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution is around 600 x 1000.
I came across a comment by Mark Ransom here, which mentions edge detection algorithms for vertical edges. I haven't come across any yet. Would they be useful in my case (since the edge isn't strictly a straight line)? It will always be a roughly vertical edge, though. I want accuracy to at least 1/100th of a pixel, and I also want access to the sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino, but it does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do this:
Apply Canny / Sobel in MATLAB and get the edges of this image (note that they won't form a continuous line). Then somehow interpolate these Sobel edges and get the coordinates at sub-pixel precision. Is that possible?
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit (fit orders the coefficients alphabetically: a, b, w, x0)
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'StartPoint', [150 50 10 150])
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x0 = 149.6 px for your first image.
However, I doubt you will be able to achieve a sub-pixel accuracy of 1/100th of a pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels clipped at 255). Because the real atan profile is cut off, the fit is biased. If you have control over the experiments, I suggest you redo them with a smaller exposure time, for instance.
There are not many points on the transition, so there is not much information on where the transition actually is. Typically, your resolution will be the square root of the width of the atan (or whatever shape you prefer). In your case this limits the sub-pixel resolution to about 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose this projection method, you should look for a way to correct the tilt before projecting in order to increase the accuracy, although this won't gain you several orders of magnitude.
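One possible sketch of such a tilt correction, estimating the edge position per row from the gradient centroid (this estimator is an assumption, not part of the fit above):
% estimate the edge x-position per row, fit a line, rotate the tilt away
Img = double(imread('bQsu5.png'));
nRows = size(Img, 1);
x0 = zeros(nRows, 1);
for r = 1:nRows
    g = abs(diff(Img(r, :)));                 % gradient magnitude along the row
    x0(r) = sum((1:numel(g)) .* g) / sum(g);  % centroid of the gradient
end
p = polyfit((1:nRows)', x0, 1);               % p(1) = horizontal drift per row
Img = imrotate(Img, -atand(p(1)), 'bilinear', 'crop');  % flip the sign if the tilt worsens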
There is a problem with your image. At the pixel level, it seems there are four interlaced subimages (odd and even rows and columns). Look at this zoomed area close to the edge.
In order to avoid this artifact, I have taken just the even rows and columns of your image and computed sub-pixel edges. Finally, I look for the best-fitting straight line, using the function clsq whose code is on this page:
% load image
url = 'http://i.stack.imgur.com/bQsu5.png';
img = imread(url);   % avoid shadowing the built-in function 'image'
imageEvenEven = img(1:2:end, 1:2:end);
imshow(imageEvenEven, 'InitialMagnification', 'fit');
% subpixel detection
threshold = 25;
edges = subpixelEdges(imageEvenEven, threshold);
visEdges(edges);
% compute fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c n] = clsq(A,2);
y = [1,200];
x = -(n(2)*y+c) / n(1);
hold on;
plot(x,y,'g');
When executing this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image has been scaled by 1/2 by taking only the even rows and columns, so the resulting coordinates must be scaled back.
Besides, you can try the other three subimages (imageEvenOdd, imageOddEven and imageOddOdd) and combine the four straight lines to obtain the best solution.

Evaluate straightness of an arbitrary contour

I want a metric of the straightness of contours in my binary image (one that is relatively fast to compute). The image looks as follows:
Now, the contours in the red box are the ones I would preferably like removed, since they are not straight. Here is what I have tried; I am implementing in MATLAB.
1. Collect the row and column coordinates of each contour, then take the derivative. For straight objects (such as a rectangle), the derivative will be mostly low, with a few spikes (at the corners of the rectangle).
Problem: the coordinates are not collected in order, i.e. not in the order in which the contour would be traversed if we imagine it as a path. Therefore, the derivative sometimes gives absurdly high values. Also, the contour is not absolutely straight; it's the output of an edge detection algorithm, so you can imagine there might be some discontinuity (see the rectangle at the bottom: the human eye can tell it is a rectangle even though it is not absolutely straight).
2. I thought about polyfit, but the same ordering issue comes up. And since it's a rectangle, I don't know how to apply polyfit to that point set.
Also, I would like to remove contours that run vertically/horizontally. Basically this is a lane detection algorithm, and lanes cannot be absolutely vertical/horizontal.
Any ideas?
You should look more into the features offered by regionprops. To be fair, I took the script from this answer, but here it is:
BW = imread('lanes.png');
BW = im2bw(BW);
figure(1),
subplot(1,2,1);
imshow(BW);
cc = bwconncomp(BW);
l = labelmatrix(cc);
a_rp = regionprops(cc, 'Area', 'MajorAxisLength', 'MinorAxisLength', 'Orientation', 'PixelList', 'Eccentricity');
idx = ([a_rp.Eccentricity] > 0.99 & [a_rp.Area] > 100 & [a_rp.Orientation] < 70 & [a_rp.Orientation] > -90);
BW2 = ismember(l, find(idx));
subplot(1,2,2);
imshow(BW2);
You can play around with the properties; 'Orientation', 'Eccentricity', and 'Area' are probably the parameters you want to tweak. I also experimented with the ratio of the major/minor axis lengths, but eccentricity basically captures this (eccentricity is a measure of how "circular" an ellipse is). Here's the output:
I actually saw a good video from MathWorks specifically about lane detection using regionprops. I'll try to find it and link it.
You can segment your image using bwlabel, then work separately on each labeled connected object, using find. This should help solve your ordering problem.
As for a metric, the only thing that comes to mind at the moment is to fit an ellipse and use the a/b (major axis / minor axis) ratio, which is essentially the eccentricity, as the parameter. For example, a straight line (even an imperfect one) will be fitted by an ellipse with a very large major axis and a very small minor axis, so you could set a ratio threshold of >10, say. Fitting an ellipse can be done using this FEX submission, for example.
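A quick sketch of that axis-ratio test using the ellipse fit built into regionprops (the threshold of 10 is just the example value above):
% keep only components whose fitted ellipse is long and thin
BW = im2bw(imread('lanes.png'));
cc = bwconncomp(BW);
rp = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratio = [rp.MajorAxisLength] ./ [rp.MinorAxisLength];
BW2 = ismember(labelmatrix(cc), find(ratio > 10));
imshow(BW2);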

Smoothing matlab plot figures

I have obtained the following figure using a 120x120 matrix and the surf function.
Is there a simple way to make the lines between different colors look smoother?
First of all, surf may not be the best way to display a 2D image; if you don't actually need the height information, imagesc will work just fine. Even better, it won't show the differently colored lines between hexagons, since it doesn't pass through the colormap at intersections.
However, regardless of your approach, a low-resolution bitmap will not automatically be transformed into arbitrary-resolution vector graphics; and you may not want that anyway, if you use the figure to inspect which combination of (x,y) produced a given value.
There are three approaches to make your image prettier: (1) segment the hexagons and use patch to create a vector-graphics image; (2) upsample the image with imresize; (3) create an RGB image and smooth each color channel separately to get softer transitions:
%# assume img is your image
nColors = length(unique(img));
%# transform the image to RGB
rgb = ind2rgb(img, jet(nColors)); %# there are much better colormaps than jet
%# filter each color channel with a Gaussian
rgbf = zeros(size(rgb));
for i = 1:3
    rgbf(:,:,i) = imfilter(rgb(:,:,i), fspecial('gaussian', 9, 3), 'replicate');
end
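For approach (2), a minimal sketch with imresize (the factor of 4 and the interpolation method are arbitrary choices):
%# upsample the index image; 'bicubic' softens the hexagon boundaries,
%# while 'nearest' would keep them crisp
imgBig = imresize(img, 4, 'bicubic');
imagesc(imgBig); axis image off; colormap(jet(nColors));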

How to improve image quality in Matlab

I'm building an "Optical Character Recognition" system.
So far the system is capable of identifying licence plates of good quality, without any noise.
At the next level, I want it to be able to identify licence plates of poor quality, degraded for various reasons.
For example, let's look at the following plate:
As you can see, the numbers do not look clear, because of light reflections or something else.
My question: how can I improve the image quality, so that when I convert to a binary image the numbers will not fade away?
Thanks in advance.
We can try to correct for the lighting effect by fitting a linear plane to the image intensities, which approximates the average level across the image. By subtracting this shading plane from the original image, we can attempt to normalize lighting conditions across the image.
For color RGB images, simply repeat the process on each channel separately, or even apply it in a different colorspace (HSV, L*a*b*, etc.).
Here is a sample implementation:
function img = correctLighting(img, method)
    if nargin < 2, method = 'rgb'; end
    switch lower(method)
        case 'rgb'
            %# process R,G,B channels separately
            for i = 1:size(img,3)
                img(:,:,i) = LinearShading( img(:,:,i) );
            end
        case 'hsv'
            %# process intensity component of HSV, then convert back to RGB
            HSV = rgb2hsv(img);
            HSV(:,:,3) = LinearShading( HSV(:,:,3) );
            img = hsv2rgb(HSV);
        case 'lab'
            %# process luminosity layer of L*a*b*, then convert back to RGB
            LAB = applycform(img, makecform('srgb2lab'));
            LAB(:,:,1) = LinearShading( LAB(:,:,1) ./ 100 ) * 100;
            img = applycform(LAB, makecform('lab2srgb'));
    end
end

function I = LinearShading(I)
    %# create X-/Y-coordinate values for each pixel
    [h,w] = size(I);
    [X,Y] = meshgrid(1:w, 1:h);
    %# fit a linear plane to the 3D points [X Y Z], where Z is the pixel intensity
    coeff = [X(:) Y(:) ones(w*h,1)] \ I(:);
    %# compute the shading plane
    shading = coeff(1).*X + coeff(2).*Y + coeff(3);
    %# subtract the shading from the image
    I = I - shading;
    %# normalize to the entire [0,1] range
    I = ( I - min(I(:)) ) ./ range(I(:));
end
Now let's test it on the given image:
img = im2double( imread('http://i.stack.imgur.com/JmHKJ.jpg') );
subplot(411), imshow(img)
subplot(412), imshow( correctLighting(img,'rgb') )
subplot(413), imshow( correctLighting(img,'hsv') )
subplot(414), imshow( correctLighting(img,'lab') )
The difference is subtle, but it might improve the results of further image processing and the OCR task.
EDIT: Here are some results I obtained by applying other contrast-enhancement techniques (IMADJUST, HISTEQ, ADAPTHISTEQ) to the different colorspaces in the same manner as above:
Remember that you have to fine-tune the parameters to fit your image...
It looks like your question has been more or less answered already (see d00b's comment); however, here are a few basic image processing tips that might help you here.
First, you could try a simple imadjust. It simply maps the pixel intensities to a "better" range, which often increases the contrast (making the image easier to view/read). I have had a lot of success with it in my work, and it is easy to use. I think it's worth a shot.
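For instance, a short sketch on the plate image above (stretchlim picks the input limits automatically):
img = imread('http://i.stack.imgur.com/JmHKJ.jpg');
img_adj = imadjust(img, stretchlim(img), []);  % stretch intensities to the full range
imshow(img_adj);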
Also, this looks promising if you simply want a higher resolution image.
Enjoy the "pleasure" of image-processing in MATLAB!
Good luck,
tylerthemiler
P.S. If you are flattening the image to binary, though, you are most likely ruining the image to start with, so don't do that if you can avoid it!
As you only want to find digits (of which there are only 10), you can use cross-correlation.
For this, you Fourier transform the picture of the plate. You also Fourier transform a pattern you want to match, e.g. a good representation of a picture of the digit 1. Then you multiply the two in Fourier space and inverse Fourier transform the result.
In the final cross-correlation you will see pronounced peaks where the pattern overlaps nicely with your image.
You do this 10 times, and then you know where each digit is. Note that you must correct the tilt before you do the cross-correlation.
This method has the advantage that you don't have to threshold your image.
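A minimal sketch of that matching for one digit template (the template filename is a placeholder; in practice you would normalize the images and handle the circular wrap-around):
% circular cross-correlation of the plate with a digit template via FFT
plate    = im2double(rgb2gray(imread('http://i.stack.imgur.com/JmHKJ.jpg')));
template = im2double(rgb2gray(imread('template1.png')));  % placeholder: picture of the digit 1
tpad = zeros(size(plate));
tpad(1:size(template,1), 1:size(template,2)) = template;  % zero-pad to the plate size
xc = real(ifft2( fft2(plate) .* conj(fft2(tpad)) ));      % cross-correlation
[~, idx] = max(xc(:));
[row, col] = ind2sub(size(xc), idx);                      % offset of the best match
Note that normxcorr2 computes the normalized version of this in a single call.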
There are certainly much more sophisticated algorithms in the literature for reading number plates. One could, for example, use Bayes' theorem to estimate which digit is most likely to occur (this helps a lot if you already have a database of possible numbers).