Irregular Shape Comparison between two inputs - matlab

I'm trying to come up with a scoring system for some behavioural psychology research.
I ask people to draw a letter, then trace over it, both on a graphics tablet. I want to assess the accuracy of this trace. So, you draw any letter ('a'), then you do it again, then I score it based on how similar it was to the first time you drew it. The drawings are stored as pixel locations.
Accuracy is assessed as closeness to the original letter. The method does not need to allow for scale, rotation or position changes. Conceptually it's like the area between the two lines, only the lines are highly irregular, so integrals (to my knowledge) won't work.
I'm writing in MATLAB, but any conceptual help would be appreciated. I've tried summing the minimum distance between all drawn pixels, but this gives good (low) scores to well-placed single points.
This must have been done before, but I'm not having any luck with my searches.
--- Partial solution using the method suggested by @Bill below. It doesn't work, as the bwdist gradient is too steep. Rather than the nice second image Bill shows, the result looks more like the original.
%% Letter to image
im = zeros(1080,1920,3); % The screen (possible pixel locations)
% A small square a bit like the letter 'a', a couple of pixels wide.
pixthick = 5;
im(450:450+pixthick, 900:1100, :) = 1;
im(550:550+pixthick, 900:1100, :) = 1;
im(450:550, 900:900+pixthick, :) = 1;
im(450:570, 1100:1100+pixthick, :) = 1;
subplot(2,1,1); imagesc(im);
atransbw = bwdist(im(:,:,1)<0.5);
subplot(2,1,2); imagesc(atransbw);

Shape contexts are a powerful feature descriptor based on "polar histograms" of the shapes. The Wikipedia page is in-depth, but here is another page with additional information (and a good visual explanation of the technique), as well as MATLAB demo code. Matching letters was one of the original applications of the method, and the demo code I link to doesn't require you to convert your trace vectors to images.
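For a flavor of the technique, here is a minimal sketch of a shape-context-style descriptor: a log-polar histogram of the positions of all other trace points relative to one reference point. It assumes the trace is an N x 2 matrix pts of (x,y) pixel coordinates; the variable names and bin counts are illustrative, and the point-matching step from the original paper is omitted.
% Minimal log-polar histogram at reference point k (a sketch, not the full method).
function h = shape_context_at(pts, k, nAngBins, nRadBins)
    d  = bsxfun(@minus, pts, pts(k,:));   % offsets from the reference point
    d(k,:) = [];                          % drop the reference point itself
    r  = sqrt(sum(d.^2, 2));              % radii
    th = atan2(d(:,2), d(:,1));           % angles in (-pi, pi]
    r  = r / mean(r);                     % normalize for scale
    rEdges  = logspace(log10(1/8), log10(2), nRadBins+1); % log-spaced radius bins
    thEdges = linspace(-pi, pi, nAngBins+1);
    rClamped = min(max(r, rEdges(1)), rEdges(end));       % keep radii inside the bins
    h = accumarray([discretize(rClamped, rEdges), discretize(th, thEdges)], ...
                   1, [nRadBins, nAngBins]);
    h = h / sum(h(:));                    % normalize to a distribution
end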
A more simplistic method might be an "image difference" defined as the exclusive-or of two letters. This would require converting your trace vectors to binary images. Something like:
x = xor(im1,im2);
d = sum(x(:)) / sum(im1(:)); %# normalize to the first image
Finally, if your trace vectors have the same number of points, or can be made to by sampling, Procrustes Analysis could be useful. The idea of Procrustes analysis is to find a least-squares optimum linear transformation (rotation, translation and scaling) between two sets of points. Goodness of fit between the two point sets is given by the "Procrustes statistic" or other measures like root-mean-square deviation of the points.
%# Whatever makes sense;
%# procrustes needs N x 2 matrices with (x,y) coords for N points.
coords1 = [x1 y1];
coords2 = [x2 y2];
%# This sampling may be too naive.
n = min( size(coords1,1), size(coords2,1) ); %# truncate to the shorter trace
coords1 = coords1(1:n,:);
coords2 = coords2(1:n,:);
%# d is sum-of-squares error
%# z is transformed coords2
%# tr is the linear transformation
[ d, z, tr ] = procrustes( coords1, coords2 );
%# RMS deviation of points may be better than SSE.
n = size(coords1,1);
rmsd = sqrt( sum((coords1(:) - z(:)).^2) / n );
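If the two traces have different numbers of points, a slightly less naive alternative to the truncation above is to resample one trace by arc length so both have the same length. A sketch (it assumes no repeated consecutive points, which would make the interpolation knots non-unique):
%# Resample a polyline (N x 2) to n points evenly spaced by arc length.
seglen = sqrt(sum(diff(coords2).^2, 2));   %# segment lengths
s = [0; cumsum(seglen)];                   %# cumulative arc length
n = size(coords1,1);                       %# match the first trace, for example
coords2 = interp1(s, coords2, linspace(0, s(end), n)');
[d, z, tr] = procrustes(coords1, coords2);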

What could help you is a distance transform, implemented in MATLAB as bwdist. This rewards lines being close, even if they don't match.
a_img_1 = imread('a.jpg');
imagesc(a_img_1);
a_img_1_dist_transform = bwdist( a_img_1(:, :, 1) < 250 );
imagesc(a_img_1_dist_transform);
You can do the same with the second image, and sum up the difference in pixel values in the distance transformed images, something like:
score = sum( abs( a_img_1_dist_transform(:) - a_img_2_dist_transform(:) ) )
(Note that this will give higher scores to less similar images, and vice versa.)
To help prevent issues that you mention of "good (low) scores to well placed single points", you could experiment with other distance measures, such as squared distance between pixel values.
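For example, a squared-distance variant of the score above, which penalizes large deviations more heavily, could look like:
% Squared difference of the distance transforms (same variables as above).
score = sum( ( a_img_1_dist_transform(:) - a_img_2_dist_transform(:) ).^2 );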

You may want to find an affine transform that matches with some error criterion, say mean squared error. This way you will be invariant to translation and scaling. Or, if you want to penalize translation, you could add the cost of the translation as well. (It would help us help you if you gave more information on what kinds of features you consider similar or dissimilar.)
Now, an efficient implementation is another matter. Perhaps you should look into image registration. I'm sure this has been done numerous times.
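As a sketch of that route, the Image Processing Toolbox provides imregconfig/imregtform for intensity-based registration. This assumes grayscale images im1 and im2 of the two letters (the names are placeholders):
% Intensity-based affine registration, then score the residual.
[optimizer, metric] = imregconfig('monomodal');   % images of the same kind
tform  = imregtform(im2, im1, 'affine', optimizer, metric);
im2reg = imwarp(im2, tform, 'OutputView', imref2d(size(im1)));
score  = sum(abs(double(im1(:)) - double(im2reg(:))));  % lower = more similar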

This is my final, overcomplicated solution, which basically uses Bill Cheatham's method. Thanks for all the help!
% pixLet is the N x 2 vector containing locations where drawing occurred. First, convert it to an image.
im = zeros(1000,1000); % This is the image
for pix = 2:size(pixLet,1)
    y1 = pixLet(pix-1,2); x1 = pixLet(pix-1,1);
    y2 = pixLet(pix,2);   x2 = pixLet(pix,1);
    % Interpolate along the stroke so consecutive samples leave no gaps.
    xyd = round(pdist([x1 y1; x2 y2])*2);
    xs = round(linspace(x1,x2,xyd));
    ys = round(linspace(y1,y2,xyd));
    for linepix = 1:length(xs)
        im(ys(linepix),xs(linepix)) = 1;
    end
end
% Blur the image. sz (kernel size) and reach (Gaussian width) are free
% parameters; the values below are placeholders to make the snippet runnable.
sz = 50; reach = 10;
blur = fspecial('gaussian',[sz sz],reach);
gausIm = conv2(im,blur,'same');
% I made a function of the above to do this for both the template and the trace.
score = sum(sum(abs(gausIm1-gausIm2)));

I would actually suggest a much more high-level solution: find an OCR machine learning algorithm that returns some kind of confidence. Or, if you don't have a confidence value, test the distance between the recognized text and the actual text.
This is like a human who watches the handwriting and attempts to understand it. The higher the confidence, the better the result.
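A minimal sketch of this idea with the Computer Vision Toolbox ocr function, assuming the trace has already been rendered into an image im as in the accepted answer:
% OCR-based scoring sketch: recognize the drawn letter, use the confidence.
results    = ocr(im, 'TextLayout', 'Word');  % treat the image as a single word
recognized = strtrim(results.Text);
confidence = mean(results.CharacterConfidences, 'omitnan');
% A trace scores well if recognized matches the intended letter with high confidence.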

Related

Looking for a tool that extracts data from a plot figure (here 2D contours from a covariance matrix or Markov chains) and reproduces the original figure

I am looking for an application or tool which is able, for example, to extract data from a 2D contour plot like the one below:
I have seen the https://dash-gallery.plotly.host/Portal/ tool, https://plotly.com/dash/ , and https://automeris.io/ , but I have tested them and it is difficult to extract the data (here the data are covariance matrices with ellipses, but I would like to extend this if possible to Markov chains).
Does anyone know of more efficient tools for this kind of 2D plot?
I am also open to commercial applications. I am on macOS 11.3.
If I am not on the right forum, please let me know.
UPDATE 1:
I tried to apply the method in MATLAB with the script below, from this previous post:
%// Import the data:
imdata = importdata('Omega_L_Omega_m.png');
Gray = rgb2gray(imdata.cdata);
colorLim = [-1 1]; %// this should be set manually
%// Get the area of the data:
f = figure('Position',get(0,'ScreenSize'));
imshow(imdata.cdata,'Parent',axes('Parent',f),'InitialMagnification','fit');
%// Get the area of the data:
title('Click with the cross on the most top left area of the *data*')
da_tp_lft = round(getPosition(impoint));
title('Click with the cross on the most bottom right area of the *data*')
da_btm_rgt = round(getPosition(impoint));
dat_area = double(Gray(da_tp_lft(2):da_btm_rgt(2),da_tp_lft(1):da_btm_rgt(1)));
%// Get the area of the colorbar:
title('Click with the cross within the upper most color of the *colorbar*')
ca_tp_lft = round(getPosition(impoint));
title('Click with the cross within the bottom most color of the *colorbar*')
ca_btm_rgt = round(getPosition(impoint));
cmap_area = double(Gray(ca_tp_lft(2):ca_btm_rgt(2),ca_tp_lft(1):ca_btm_rgt(1)));
close(f)
%// Convert the colormap to data:
data = dat_area./max(cmap_area(:)).*range(colorLim)-abs(min(colorLim));
It seems that I get data in the data array, but I don't know how to exploit it to reproduce the original figure.
Could anyone see how to plot this kind of figure with MATLAB from the data I have extracted? (I am not sure the MATLAB script has generated all the data for the green, orange and blue contours, with each confidence level, that is, 68%, 95% and 99.7%.)
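For what it's worth, once the data matrix has been recovered, a sketch of redrawing the contours could look like the following (the level values are placeholders that would have to be matched to your color scale):
% Redraw contours from the recovered matrix (levels are placeholders).
figure;
contour(data, 3);                 % quick look with three automatic levels
% contour(data, [v68 v95 v997]); % explicit 68/95/99.7% levels, once known
axis xy;                          % image rows run top-down; flip for plot axes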
UPDATE 2: I have had the first elements of an answer at the following link:
partial answer but not fully completed
I quote elements of the approach:
clc
clear all;
imdata = imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/642495/image.png');
close all;
Gray = rgb2gray(imdata);
yax=sum(conv2(single(Gray),[-1 -1 -1;0 0 0; 1 1 1],'valid'),2);
xax=sum(conv2(single(Gray),[-1 -1 -1;0 0 0; 1 1 1]','valid'),1);
figure(1),subplot(211),plot(xax),subplot(212),plot(yax)
ROIy = find(abs(yax)>1e5);
ROIyinner = find(diff(ROIy)>5);
ROIybounds = ROIy([ROIyinner ROIyinner+1]);
ROIx = find(abs(xax)>1e5);
ROIxinner = find(diff(ROIx)>5);
ROIxbounds = ROIx([ROIxinner ROIxinner+1]);
PLTregion = Gray(ROIybounds(1):ROIybounds(2),ROIxbounds(1):ROIxbounds(2));
PLTregion(PLTregion==255)=nan;
figure(2),imagesc(PLTregion)
[N X]=hist(single(PLTregion(:)),0:255);
figure(3),plot(X,N),set(gca,'yscale','log')
PLTitems = find(N>2000) %limit "colors" of interest to items with >2000 pixels
% Console output:
% PLTitems =
%     1    67    90   101   129   132   144   167   180   194
PLTvalues = X(PLTitems);
PLTvalues(1)=[]; %ignore black?
%test out region 1
for ind = 1:numel(PLTvalues)
    temp = zeros(size(PLTregion));
    temp(PLTregion==PLTvalues(ind) | (PLTregion<=50 & PLTregion>10))=255;
    % figure(100), imagesc(temp)
    temp = bwareaopen(temp,1000);
    temp = imfill(temp,'holes');
    figure(100), subplot(3,3,ind),imagesc(temp)
    figure(101), subplot(3,3,ind),imagesc(single(PLTregion).*temp,[0 255])
end
If anyone knows how to improve on these first interesting results, please mention it.
Restating the problem - my understanding, given the different comments and your updates, is the following:
someone other than you is in possession of data, which happens to be 2D data, i.e. an Nx2 matrix;
using the covariance matrix, they are effectively saying something about the joint distribution of these two dimensions, specifically about the variance;
if they assume a Gaussian distribution, as is implied by your comment regarding 68%, 95% and 99.7% for 1 sigma, 2 sigma and 3 sigma, they can draw ellipses which represent the 2D normal distribution: these are in fact some of the contour lines associated with the 3D "bell" surface;
you have obtained the contour lines in a graph and are trying to obtain the covariance matrix (not the original data...);
you are concerned about the complexity of having to extract the information from each ellipse.
Partial answer:
It is impossible to recover the original data. I hope you are already aware of that, but in case you are not, let's just note that the covariance matrix is a summary statistic of the data, much like the average, and many different datasets can happen to have the same summary statistic (the same way many different sets of numbers can give you an average of 10).
It is possible to somewhat recover the covariance matrix, i.e. the 3 numbers a, b and c in the matrix [a,b;b,c], though the error in doing so will likely be large because of how imprecise the pixel representation is. Essentially, you will be looking for the dimensions of the two axes, for the variances, as well as the angle of one of the axes, for the covariance.
Unless I am mistaken, under the Gaussian assumption above, you only need to measure this for one of the three ellipses, and then factor by whatever number of sigmas that contour represents. Here you might want to either use the best-defined ellipse, or attempt to use the largest one, which will provide the maximum precision for your measurements (cf. pixelization).
Also, the problem of finding the axes and angle for the ellipse need not be as complex as what it seems like in your first trials: instead of trying to find the contour of the ellipses, find the bounding rectangle.
In order to further simplify this process, if your images are color-coded the way you show, then a filter on blue pixels might be enough in terms of image processing. Then simply take the minimum and maximum (x,y) coordinates in order to obtain the bounding rectangle.
Once the bounding rectangle is obtained, find the equation to your ellipse (that's a question for a math group, but you could start here for example).
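A hedged sketch of the whole pipeline described above (the color thresholds and the sigma factor k are assumptions to adjust):
% Filter one color, take the bounding rectangle, read off the variances.
rgb  = imread('Omega_L_Omega_m.png');             % file name from the update above
blue = rgb(:,:,3) > 150 & rgb(:,:,1) < 100 & rgb(:,:,2) < 100; % assumed thresholds
[row, col] = find(blue);
k = 2;                                            % e.g. the 95% ~ 2-sigma contour
halfW = (max(col) - min(col)) / 2;                % bounding-box half-width (pixels)
halfH = (max(row) - min(row)) / 2;                % bounding-box half-height (pixels)
% For a k-sigma Gaussian ellipse, the bounding box half-extents are
% k*sqrt(a) and k*sqrt(c) for covariance [a, b; b, c] (in pixel units):
a = (halfW / k)^2;
c = (halfH / k)^2;
% The off-diagonal b still requires the tilt angle of the ellipse.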
Happy filtering!

How to use distance to extract features and compare images: matlab

I was trying to write code for feature extraction from two images which are actually similar. I tried to extract the intersection points from both images and calculated the distance from each intersection point to all other points. This procedure was iterated for all points in both images.
Then I compared the distances between points in both images, but I found that even for dissimilar images I am getting the same kind of distances, and I am not able to distinguish them.
Is there any way to improve this method, or is there any other way to find the similarity?
I = bwmorph(I,'skel',Inf);
II = bwmorph(II,'skel',Inf);
[i,j] = ind2sub(size(I),find(bwmorph(bwmorph(I,'thin',Inf),'branchpoint') == 1));
[i1,j1] = ind2sub(size(II),find(bwmorph(bwmorph(II,'thin',Inf),'branchpoint') == 1));
figure,imshow(I); hold on; plot(j,i,'rx');
figure,imshow(II); hold on; plot(j1,i1,'rx')
m=size(i,1);
n=size(j,1);
m1=size(i1,1);
n1=size(j1,1);
for x=1:m
    for y=1:n
        d1(y,x)=round(sqrt((i(y,1)-i(x,1)).^2+(j(y,1)-j(x,1)).^2));
    end
end
for x1=1:m1
    for y1=1:n1
        dd1(y1,x1)=round(sqrt((i1(y1,1)-i1(x1,1)).^2+(j1(y1,1)-j1(x1,1)).^2));
    end
end
size(d1);
k1=reshape(d1,1,m*n);
k=sort(k1);
k=unique(k);
size(dd1);
k2=reshape(dd1,1,m1*n1);
k2=sort(k2);
k2=unique(k2);
z = intersect(k,k2)
length(z);
if length(z)>20
    disp('similar images');
else
    disp('dissimilar images');
end
This is a part of my code where I tried to extract features.
[Images: input 1, input 2, skeleton 1, skeleton 2]
I think your code is not the problem. Instead, it seems that either your feature descriptor is not powerful enough or your comparison method is not powerful enough, or a combination of the two. This gives us several options for how to explore solutions to the problem.
Feature Descriptor
You are constructing an image feature consisting of the distances between skeleton intersection points. This is an unusual approach and a very interesting one. It reminds me of peak constellations, a feature used by Shazam to audio-fingerprint songs. If you are interested in exploring that more sophisticated technique, take a look at "An Industrial Strength Audio Search Algorithm" by Avery Li-Chun Wang. I believe you could adapt their feature descriptor to your application.
However, if you want a simpler solution, there are some other options as well. Your current descriptor uses unique to find a set of unique distances between the skeleton intersection points. Take a look at the following images of a line and an equilateral triangle, both with 5-unit line lengths. If we use the unique distances between vertices to make the feature, the two images have identical features, but we can also count the number of lines of each length in a histogram.
The histogram preserves more of the image structure as part of the feature. Using a histogram might help distinguish better between your similar and dissimilar cases.
Here's some demo code for histogram features using the Matlab demo images pears.png and peppers.png. I had difficulty extracting the skeleton from your provided images, but you should be able to adapt this code easily to your application.
I1 = im2bw(imread('peppers.png'));
I2 = im2bw(imread('pears.png'));
I1_skel = bwmorph(I1,'skel',Inf);
I2_skel = bwmorph(I2,'skel',Inf);
[i1,j1] = ind2sub(size(I1_skel),find(bwmorph(bwmorph(I1_skel,'thin',Inf),'branchpoint') == 1));
[i2,j2] = ind2sub(size(I2_skel),find(bwmorph(bwmorph(I2_skel,'thin',Inf),'branchpoint') == 1));
%You used a for loop to find the distance between each pair of
%intersections. There is a function for this.
d1 = round(pdist2([i1, j1], [i1, j1]));
d2 = round(pdist2([i2, j2], [i2, j2]));
%Choose a number of bins for the histogram.
%This will be the length of the feature.
%More bins will preserve more structure.
%Fewer bins will help generalize between similar but not identical images.
num_bins = 100;
%Instead of using `unique` to remove repetitions use `histcounts` in R2014b
%feature1 = histcounts(d1(:), num_bins);
%feature2 = histcounts(d2(:), num_bins);
%Use `hist` for pre R2014b Matlab versions
feature1 = hist(d1(:), num_bins);
feature2 = hist(d2(:), num_bins);
%Normalize the features
feature1 = feature1 ./ norm(feature1);
feature2 = feature2 ./ norm(feature2);
figure; bar([feature1; feature2]');
title('Features'); legend({'Feature 1', 'Feature 2'});
xlim([0, num_bins]);
Here are what the detected intersection points are in each image
Here are the resulting features. You can see the clear differences between images.
Feature Comparison
The second part to consider is how you compare your features. Currently, you are simply looking for >20 similar distances. With the 'peppers.png' and 'pears.png' test images distributed with MATLAB, I find more than 2000 intersection points in one image and 260 in the other. With so many points, it is trivial to have an overlap of >20 similar distances. In your images, the number of intersection points is much smaller. You could carefully adjust the threshold of similar distances, but I think this metric is probably too simplistic.
In Machine Learning, a simple way to compare two feature vectors is vector similarity or distance. There are multiple distance metrics you could explore. Common ones include
Cosine Distance
score_cosine = feature1 * feature2'; %Cosine similarity between the normalized vectors
%Set a threshold for cosine similarity [0, 1] where 1 is identical and 0 is perpendicular
cosine_threshold = .9;
disp('Cosine Compare')
disp(score_cosine)
if score_cosine > cosine_threshold
    disp('similar images');
else
    disp('dissimilar images');
end
Euclidean Distance
score_euclidean = pdist2(feature1, feature2);
%Set a threshold for euclidean similarity where smaller is more similar
euclidean_threshold = 0.1;
disp('Euclidean Compare')
disp(score_euclidean)
if score_euclidean < euclidean_threshold
    disp('similar images');
else
    disp('dissimilar images');
end
If these don't work, you may need to train a classifier to find a more complicated function to distinguish between similar and dissimilar images.
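As a minimal sketch of that last option, assuming you have collected labeled feature pairs (all variable names here are placeholders):
% Train a simple classifier on feature differences (Statistics and ML Toolbox).
% X: M x num_bins matrix, each row = abs(featA - featB) for one labeled pair.
% y: M x 1 labels, e.g. 1 = similar pair, 0 = dissimilar pair.
mdl   = fitcsvm(X, y);                            % linear SVM by default
label = predict(mdl, abs(feature1 - feature2));   % classify a new pair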

Is this the right way of projecting the training set into the eigenspace? MATLAB

I have computed PCA using the following :
function [signals,V] = pca2(data)
[M,N] = size(data);
data = reshape(data, M*N,1);
% subtract off the mean for each dimension
mn = mean(data,2);
data = bsxfun(@minus, data, mean(data,1));
% construct the matrix Y
Y = data'*data / (M*N-1);
[V D] = eigs(Y, 10); % reduce to 10 dimensions
% project the original data
signals = data * V;
My question is:
Is "signals" is the projection of the training set into the eigenspace?
I saw in "Amir Hossein" code that "centered image vectors" that is "data" in the above code needs to be projected into the "facespace" by multiplying in the eigenspace basis's. I don't really understand why is the projection done using centered image vectors? Isn't "signals" enough for classification??
By signals, I assume you mean to ask why we subtract the mean from the raw vector form of the image.
If you think about PCA, it is trying to give you the direction along which the data varies most. However, as your images probably contain only positive pixel values, the data will always lie in the positive quadrant, which will mislead, especially, your first and most important eigenvector. You can search for more about the second moment matrix. But I will share a bad paint image that explains it. Sorry about my drawing.
Please ignore the size of the stars.
Stars: your data.
Red line: eigenvectors.
As you can easily see in 2D, centering the data can give a better direction for your principal component. If you skip this step, your first eigenvector will be biased toward the mean and cause poorer results.
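A small runnable demo of this effect (a sketch; the rotation angle and offset are arbitrary):
% Demo: centering changes the leading eigenvector.
rng(0);
R = [cosd(30) -sind(30); sind(30) cosd(30)];       % 30-degree rotation
X = [randn(200,1), 0.3*randn(200,1)] * R';         % elongated cloud at 30 degrees
X = X + 5;                                         % shift into the positive quadrant
[v_raw, ~] = eigs(X'*X / size(X,1), 1);            % second moment of raw data
Xc = bsxfun(@minus, X, mean(X,1));                 % centered data
[v_cen, ~] = eigs(Xc'*Xc / (size(X,1)-1), 1);      % covariance
% v_raw points roughly toward the mean (the [1;1] direction);
% v_cen recovers the true 30-degree axis of variation.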

Normalizing a histogram and having the y-axis in percentages in matlab

Edit:
Alright, so I answered my own question, by reading older questions a bit more. I apologize for asking the question! Using the code
Y = rand(10,1);
C = hist(Y);
C = C ./ sum(C);
bar(C)
with the corresponding data instead of the random data worked fine. Just need to optimize the bin size now.
Good day,
Now I know that you must be thinking that this has been asked a thousand times. In a way, you are probably right, but I could not find the answer to my specific question in the posts that I found on here, so I figured I might as well just ask. I'll try to be as clear as possible, but please tell me if it is not evident what I want to do.
Alright, so I have a (row) vector with 5000 elements, all of which are just integers. Now what I want to do is plot a histogram of these 5000 elements, but in such a way that the y-axis gives the chance of being in that certain bin, while the x-axis is just still regular, as in it gives the value of that specific bin.
Now, what made sense to me was to normalize everything, but that doesn't seem to work, at least how I'm doing it.
My first attempt was
sums = sum(A);
hist(sums/trapz(sums),50)
I omitted the rest because it imports a lot of data from a certain file, which doesn't really matter. sums = sum(A) works fine, and I can see the vector in my matlab thingy. (What should I call it, console?). However, dividing by the area with trapz just changes my x-axis, not my y-axis. Everything gets super small, on the order of 10^-3, while it should be on the order of 10.
Now looking around, someone suggested to use
hist(sums,50)
ylabels = get(gca, 'YTickLabel');
ylabels = linspace(0,1,length(ylabels));
set(gca,'YTickLabel',ylabels);
While this certainly makes the y-axis go from 0 to 1, it is not normalized at all. I want it to actually reflect the chance of being in a certain bin. Combining the two does not work either. I apologize if the answer is very obvious; I just don't see it.
Edit: I realize this is a separate question (that has been asked a million times), but I just picked the bin size by hand until it looked good, as in no bars missing from the histogram. I've seen several different scripts that are supposed to optimize bin size, but none of them seem to make the 'best' looking histogram in every case, sadly :( Is there an easy way to pick the size, if all the numbers are integers?
(Just to close the question)
A histogram is an absolute frequency plot, so the sum of all bin frequencies (the sum of the output vector of the hist function) is always the number of elements in its input vector. So if you want a percentage output, all you need to do is divide each element of the output by that total number:
x = randn(10000, 1);
numOfBins = 100;
[histFreq, histXout] = hist(x, numOfBins);
figure;
bar(histXout, histFreq/sum(histFreq)*100);
xlabel('x');
ylabel('Frequency (percent)');
If you want to reconstruct the probability density function of your data, you need to take into account the bin size of the histogram and divide the frequencies by that:
x = randn(10000, 1);
numOfBins = 100;
[histFreq, histXout] = hist(x, numOfBins);
binWidth = histXout(2)-histXout(1);
figure;
bar(histXout, histFreq/binWidth/sum(histFreq));
xlabel('x');
ylabel('PDF: f(x)');
hold on
% fit a normal dist to check the pdf
PD = fitdist(x, 'normal');
plot(histXout, pdf(PD, histXout), 'r');
Update:
Since MATLAB R2014b, you can use the 'histogram' command to easily produce histograms with various normalizations. For example, the above becomes:
x = randn(10000, 1);
figure;
h = histogram(x, 'normalization', 'pdf');
xlabel('x');
ylabel('PDF: f(x)');
hold on
% fit a normal dist to check the pdf
PD = fitdist(x, 'normal');
plot(h.BinEdges, pdf(PD, h.BinEdges), 'r');
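As for the bin-size sub-question in the edit: when all the values are integers, a natural choice that guarantees no "missing bar" artifacts is a unit-width bin centered on each integer. A sketch, using sums from the question:
% Unit-width bins centered on each integer value.
edges = (min(sums)-0.5) : 1 : (max(sums)+0.5);
histogram(sums, edges);   % R2014b+; use hist(sums, min(sums):max(sums)) before that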

Any idea how to find the local minimum?

I have many different data sets consisting of discrete data. The local minimum is not necessarily the smallest value; it is the valley around the first peak. I am trying to find the indices of the first valley around the first peak. My idea is to check the difference between two neighboring points: when the difference is less than some critical value and the forward point is larger than the backward point, that's the point we want. e.g.
for k=PEAK_POS:END_POS
    if ( (abs(y(k)-y(k-1))<=0.01) && (y(k-1)>y(k)) )
        expected_pos = k;
        break;
    end
end
This works for some data sets but not for all, since some data sets have a different sample step, so the critical condition would have to change. But I have so many data sets to analyse that I don't think I can analyze each set manually. I am looking for any better way to find that minimum. Thanks.
As mentioned by @JakobS., optimization theory is a large field in mathematics, with its own journals and conferences and everything.
Your problem does not sound complicated enough to justify the optimization toolbox (correct me if I'm wrong). It sounds like fminsearch is sufficient for your needs. Here's a brief tutorial for it. Type help fminsearch or doc fminsearch for more info.
% example cost function to minimize
z = @(x) sin(x(:,1)).*cos(x(:,2));
% Make a plot
x = -pi:0.01:pi;
y = x;
[x,y] = meshgrid(x,y);
figure(1), surf(x,y, reshape(z([x(:) y(:)]), size(x)), 'edgecolor', 'none')
% find local minimum
[best, fval] = fminsearch(z, [pi,pi])
The result is
best =
1.570819831365890e+00 3.141628097071647e+00
fval =
-9.999999990956473e-01
Which is obviously a very reasonable approximation to the expected local optimum.
Optimization problems are a very broad topic and a lot has been done already; it's not necessarily a good idea to start coding your own algorithms. For MATLAB there is the Optimization Toolbox, which might help: http://www.mathworks.de/products/optimization/
I would use the condition that to the left of a minimum the derivative is < 0 and to the right it is > 0.
Like in this example:
x = cumsum(rand(1,100)); % nonuniform distance
y = 5*sin(x/10)+randn(size(x)); % example data
dd = diff(y);
ig = [false (dd(1:end-1)<0 & dd(2:end)>0) false]; % patch to correct length
plot(x,y)
line(x(ig),y(ig),'Marker','+','LineStyle','none','Color','r')
and as you state you'd like the first one after a peak:
x_peak = 15;
candidates = x(ig);
i_min=find(candidates>x_peak,1,'first');
candidates(i_min)