How to statistically test the accuracy of 3D models? - matlab

I have built a 3D model from a 2D image. I want to know how accurate my model is using some statistical test. I think there are many methods available to do this, like correlation and mean squared error, as mentioned in this question: Is it possible to compare 3D images?.
I couldn't find a clear description of the available tests on other sites. I've found an implementation which compares 2D images using mean squared error here: http://www.mathworks.com/matlabcentral/answers/81048-mse-mean-square-error. I'm not sure if this can be used to calculate the model accuracy. Also, I didn't find an explanation of how the tests work, i.e. which parameters are compared (color, intensity, etc.)?
EDIT: For more clarity, the 3D model represents every pixel in the 2D image as a voxel with an associated color. The purpose of this model is to reconstruct the different color regions found in the 2D image in the 3D representation. So the number of pixels of a given color (they represent a region) is counted in the 2D image, and a similar number of voxels is constructed in the 3D model and given the same color. What matters in this modeling problem is the following,
1- Size of the regions (must be nearly the same in the 2D image and the model).
2- Connectivity of a region in the 2D image and of its corresponding region constructed in the 3D model must be similar. By connectivity I mean checking whether the region's components are scattered through the image or connected into one large region, instead of many small scattered regions of the same color (a rough sketch of both checks follows below).
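For reference, a minimal MATLAB sketch of both checks (my own code with assumed variable names: img2d is the quantized/labeled 2D image and model3d the voxel model, both holding the same color labels; bwconncomp needs the Image Processing Toolbox):
% Sketch only: compare region size and connectivity per color label.
labels = unique(img2d(:));
for c = labels'                                    % loop over every color label
    mask2d = (img2d == c);                         % pixels of this color in the 2D image
    mask3d = (model3d == c);                       % voxels of this color in the 3D model
    sizeRatio = nnz(mask3d) / max(nnz(mask2d), 1); % criterion 1: region size
    cc2d = bwconncomp(mask2d);                     % criterion 2: connectivity (2D components)
    cc3d = bwconncomp(mask3d);                     % 3D components, default 26-connectivity
    fprintf('color %d: size ratio %.2f, components 2D=%d 3D=%d\n', ...
        c, sizeRatio, cc2d.NumObjects, cc3d.NumObjects);
end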
EDIT2: I think the color correlogram is suitable. I have found code that implements it, but it is not clear to me. Here is the code:
% Soumyabrata Dev
% E-mail: soumyabr001#e.ntu.edu.sg
% http://www3.ntu.edu.sg/home2012/soumyabr001/
I = imread('img.jpg');
correlogram_vector = [];
[Y, X] = size(rgb2gray(I));                     % image height (Y) and width (X)
% quantize image into 64 colors = 4x4x4, in RGB space
[img_no_dither, ~] = rgb2ind(I, 64, 'nodither');
% figure, imshow(img_no_dither, map);
% rgb = ind2rgb(img_no_dither, map); % rgb = double(rgb)
distance_vector = [1 3];                        % inf-norm distances at which pixel pairs are counted
[~, d] = size(distance_vector);
count_matrix = zeros(64, d);                    % same-color neighbour counts per color and distance
total_matrix = zeros(64, d);                    % valid (in-image) neighbour counts per color and distance
prob_dist = cell(1, d);
for serial_no = 1:d
    for x = 1:X
        for y = 1:Y
            color = img_no_dither(y, x);        % quantized color index of the current pixel
            % At the given distance, count same-color and valid neighbours
            [positive_count, total_count] = get_n(distance_vector(serial_no), x, y, color, img_no_dither, X, Y);
            count_matrix(color+1, serial_no) = count_matrix(color+1, serial_no) + positive_count;
            total_matrix(color+1, serial_no) = total_matrix(color+1, serial_no) + total_count;
        end
    end
    % probability that a neighbour at this distance has the same color (+1 avoids division by zero)
    prob_dist{serial_no} = count_matrix(:, serial_no) ./ (1 + total_matrix(:, serial_no));
end
% stack the per-distance probability vectors into one feature vector
for serial_no = 1:d
    correlogram_vector = cat(1, correlogram_vector, prob_dist{serial_no});
end
This is the get_n helper function:
function [positive_count,total_count] = get_n(n,x,y,color,img_no_dither,X,Y)
% This function is useful to get the validity map of the neighborhood case.
% It can handle any neighborhood distance n.
% Input
%   n             = the order (inf-norm distance) of the neighborhood
%   x & y         = x and y co-ordinates of the given pixel
%   color         = particular quantized color
%   img_no_dither = the color-quantized image matrix
%   X & Y         = the original dimensions of the input image
% Output
%   positive_count = the number of occurrences which have the same color
%   total_count    = the total number of valid cases for this particular instance

% Because of the property of the inf-norm, each distance n has 8*n neighbours:
% the pixels on the square ring of "radius" n around the given pixel.
valid_vector8n = zeros(1,8*n);
positive_count = 0; total_count = 0;
nbrs_x = zeros(1,8*n); nbrs_y = zeros(1,8*n);

% The counting of the pixels is done in the following manner: from the
% given pixel, go left --> up --> right --> down --> left --> up.

% Y co-ordinates of the neighbours
nbrs_y(1) = y;                       % same row as the given pixel (the neighbour directly to its left)
d = 1;
for k = 2:1+n                        % going up along the left edge of the ring
    nbrs_y(k) = y - d;
    d = d + 1;
end
nbrs_y(1+n:1:3*n+1) = y - n;         % top edge of the ring
d = 0;
for k = 3*n+1:5*n+1                  % going down along the right edge
    nbrs_y(k) = y - n + d;
    d = d + 1;
end
nbrs_y(5*n+1:1:7*n+1) = y + n;       % bottom edge of the ring
d = 0;
for k = 7*n+1:1:7*n+1+(n-1)          % going up the left edge, back towards the start
    nbrs_y(k) = y + n - d;
    d = d + 1;
end

% X co-ordinates of the neighbours
nbrs_x(1) = x - n;
nbrs_x(2:1:1+n) = x - n;             % left edge of the ring
d = 0;
for k = 1+n:1:3*n+1                  % moving right along the top edge
    nbrs_x(k) = x - n + d;
    d = d + 1;
end
nbrs_x(3*n+1:5*n+1) = x + n;         % right edge of the ring
d = 0;
for k = 5*n+1:7*n+1                  % moving left along the bottom edge
    nbrs_x(k) = x + n - d;
    d = d + 1;
end
nbrs_x(7*n+1:7*n+1+(n-1)) = x - n;   % left edge again

% Assigning the validity of the neighborhood: a neighbour is valid only if
% it falls inside the image.
for i = 1:8*n
    if nbrs_x(i) > 0 && nbrs_x(i) <= X && nbrs_y(i) > 0 && nbrs_y(i) <= Y
        valid_vector8n(i) = 1;
    else
        valid_vector8n(i) = 0;
    end
end

% Counting the number of same-color pixels in the valid areas of the
% neighborhood.
for j = 1:8*n
    if valid_vector8n(j) == 1
        data = img_no_dither(nbrs_y(j), nbrs_x(j));
        if (data == color)
            positive_count = positive_count + 1;
        end
        total_count = total_count + 1;
    end
end
end
Can anyone please clarify how this code works?
The code above is for an autocorrelogram, not a correlogram. I've read that the former is better, but it can only calculate the spatial probability using pixels of the same color (it can't be applied to pairs of pixels with different colors). Is this right?
Thank You.

TLDR: Classical workflow:
find matching features in both models,
calculate the distance (a rough sketch of this step follows below),
??????,
PROFIT!!
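A minimal sketch of the "calculate the distance" step, assuming both models are available as N-by-3 and M-by-3 point lists ptsA and ptsB (my variable names; knnsearch needs the Statistics Toolbox):
% Sketch only: symmetric nearest-neighbour distance between two point sets.
[~, dAB] = knnsearch(ptsB, ptsA);        % for each point of A, distance to the closest point of B
[~, dBA] = knnsearch(ptsA, ptsB);        % and the other direction, to make the measure symmetric
rmsDist = sqrt(mean([dAB; dBA].^2));     % one summary accuracy number (root mean square distance)
maxDist = max([dAB; dBA]);               % Hausdorff-like worst-case error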

Related

plot polar grey values in matrix without interpolating every for loop

I have a matrix with grey values between 0 and 1. For every entry in the matrix, there are certain polar coordinates that indicate the position of the grey values. I already have the Theta and Rho values (polar), both in separate 512×960 matrices, and grayscale values (in a matrix called C) for every Theta and Rho combination. I have the same for X and Y, as I just use pol2cart for the transformation. The problem is that I cannot directly plot these values, as they do not yet fit in the 'bins' of the new matrix.
What I want: to put the grey values in a square matrix of size 1024×1024. I cannot do this directly, because the polar coordinates fall in between the grid of this matrix. Therefore, we now use interpolation, but this is extremely time consuming and has to be done separately for every dataset, although the transformation from the original matrices to this final one will always be the same. Instead, I'd like to determine this matrix once (either analytically or numerically) and use a matrix multiplication or something similar to apply the manipulation efficiently in every cycle of the code.
One example of what one of these transformations could look like:
The zeros in the first matrix are the grid, and the value 1 (in between the grid) is the grey value that falls in between four grid points; I'd then like to transform this into the second matrix (don't mind the visual spacing between the points).
For every dataset, I have hundreds of these matrices, so I would like to make the code more efficient.
Background: I'm using TriScatteredInterp now for the interpolation. We tried scatteredInterpolant as well, but that is slower. I also posted a related question, but decided to split the two possible solutions, because the solution I ask for here is also applicable to non-MATLAB code and will probably be faster and makes for a smoother (no continuous popping up of figures) execution of the code.
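One way such a fixed transformation could be precomputed (a sketch I am adding here, not part of the original question; it assumes the scattered sample locations X, Y and values C described above, needs delaunayTriangulation from R2013a or newer, and builds the weights once so each new dataset costs only a sparse matrix-vector product):
% Sketch: precompute a sparse interpolation matrix W, then reuse it for every dataset.
DT = delaunayTriangulation(X(:), Y(:));                 % triangulate the sample locations once
[xg, yg] = meshgrid(linspace(min(X(:)), max(X(:)), 1024), ...
                    linspace(min(Y(:)), max(Y(:)), 1024));
[ti, bc] = pointLocation(DT, xg(:), yg(:));             % enclosing triangle + barycentric weights
ok   = ~isnan(ti);                                      % grid points inside the triangulation
vert = DT.ConnectivityList(ti(ok), :);                  % the three sample indices per grid point
rows = repmat(find(ok), 1, 3);
vals = bc(ok, :);
W = sparse(rows(:), vert(:), vals(:), numel(xg), numel(X));
% For every new dataset C (same X, Y layout), the 1024-by-1024 result is one multiplication:
G = reshape(W * C(:), size(xg));                        % linear interpolation onto the square grid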
Using the image processing toolbox
Images work a bit differently than the data you have. However, it's fairly straightforward to map one representation into the other.
There is only one problem I see: wrapping. Obviously, θ = 2π = 0, but MATLAB does not know that. AFAIK, there is no easy way to tell MATLAB that.
Why does this matter? Well, simply put, inter-pixel interpolation uses information from the nearest N neighbors to find intermediate colors, with N depending on the interpolation kernel. When doing this somewhere in the middle of the image there is no problem, but at the edges MATLAB has to know that the left edge equals the right edge. This is not standard image processing, and I'm not aware of any function that is capable of this.
Implementation
Now, when disregarding the wrapping problem, this is one way to do it:
function resize_polar()
%% ORIGINAL IMAGE
% ==========================================================================
% Some random greyscale data
C = double(rgb2gray(imread('stars.png')))/255;
% Your current size, and desired size
sz_x = size(C,2); new_sz_x = 1024;
sz_y = size(C,1); new_sz_y = 1024;
% Ranges for theta and rho;
% replace with your actual values
rho_start = 0; theta_start = 0;
rho_end = 10; theta_end = 2*pi;
% Generate regularly spaced grid;
theta = linspace(theta_start, theta_end, sz_x);
rho = linspace(rho_start, rho_end, sz_y);
[theta, rho] = meshgrid(theta,rho);
% Make plot of generated data
plot_polar(theta, rho, C, 'Original image');
% Resize data
[theta,rho,C] = resize_polar_data(theta, rho, C, [new_sz_y new_sz_x]);
% Make plot of generated data
plot_polar(theta, rho, C, 'Rescaled image');
end
function [theta,rho,data] = resize_polar_data(theta,rho,data, new_dims)
% Create fake RGB image cube
IMG = cat(3, theta,rho,data);
% Rescale as if theta and rho are RG color data in the RGB
% image cube
IMG = imresize(IMG, new_dims, 'nearest');
% Split up the data again
theta = IMG(:,:,1);
rho = IMG(:,:,2);
data = IMG(:,:,3);
end
function plot_polar(theta, rho, data, label)
[X,Y] = pol2cart(theta, rho);
figure('renderer', 'opengl')
clf, hold on
surf(X,Y,zeros(size(X)), data, ...
'edgecolor', 'none');
colormap gray
title(label);
end
The images used and plotted:
Le awesomely-drawn 512×960 PNG image
Now, the two look the same (couldn't really come up with a better-suited image), so you'll have to believe me that the 512×960 has indeed been rescaled to 1024×1024, with nearest-neighbor interpolation.
Here are some timings for the actual imresize() operation for some simple kernels:
nearest : 0.008511 seconds.
bilinear: 0.019651 seconds.
bicubic : 0.025390 seconds. <-- default kernel
But this depends strongly on your hardware; I believe imresize offloads a lot of work to the GPU, so if you have a crappy one, it'll be slower.
Wrapping
If the wrapping problem is really important to you, you can modify the function above to do the following:
first, rescale the image with imresize() like before
horizontally concatenate the second half of the grayscale data and the first half. Meaning, you swap the first and second halves to make the left and right edges (0 and 2π) touch in the middle.
rescale this intermediate image with imresize()
Extract the central vertical strip of the rescaled intermediate image
split that up in two equal-width strips
and replace the edge strips of the output image with the two strips you just created
Now, this is kind of a brute-force approach: you are re-scaling the image twice, and most of the pixels from the second rescale will be discarded. If performance is a problem, you can of course apply the rescale only to the central strip of that intermediate image. But, well, that will be a bit more complicated.
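A rough sketch of those steps (my own code, untested; it reuses C, new_sz_x and new_sz_y from the function above, handles only the greyscale layer, and assumes a strip half-width w that covers the interpolation kernel):
out = imresize(C, [new_sz_y new_sz_x]);              % step 1: the normal rescale
shifted = circshift(C, [0, round(size(C,2)/2)]);     % step 2: swap halves so the 0/2*pi seam sits in the middle
shiftedBig = imresize(shifted, [new_sz_y new_sz_x]); % step 3: rescale the swapped image
w = 8;                                               % assumed strip half-width, in output pixels
mid = round(new_sz_x/2);
strip = shiftedBig(:, mid-w+1 : mid+w);              % step 4: central vertical strip around the seam
out(:, end-w+1:end) = strip(:, 1:w);                 % steps 5-6: left half of the strip -> right edge
out(:, 1:w) = strip(:, w+1:end);                     %           right half of the strip -> left edge
The theta and rho layers would need the same treatment if you keep the fake-RGB-cube trick.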

Expected number of points in a cubic volume given a GMM

I have a 3D space which I discretized into voxels (cubes of volume). I also have a set of 3D points in this space. I want to know the expected number of points in a given voxel. I chose a GMM as a model for this purpose, but I do not know how to calculate what I want starting from the mu, sigma and weight of each Gaussian.
So far I managed to fit the GMM (easy):
obj = gmdistribution.fit(points', 20);
and plot it via
figure(1);
hold on;
for i = 1:k
    plot_gaussian_ellipsoid(obj.mu(i,:), obj.Sigma(:,:,i));
end
axis equal;
which results in what I expect, that is a map where the colors tell me the concentration of points.
The question is, how can I extract the expected number of points in a voxel, given its center (x,y,z) and its side s?
You can use the following (see the example here: http://www.mathworks.nl/help/stats/gmdistribution.cluster.html):
idx = cluster(gm,points);
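The cluster call assigns each point to a component; for the expected number of points in a voxel itself, one possible computation (a sketch I am adding, not part of the original answer) integrates each Gaussian over the voxel with mvncdf from the Statistics Toolbox:
% Sketch: expected count in a cubic voxel with centre c (1-by-3) and side s.
N  = size(points, 2);                    % total number of points (points assumed 3-by-N as in the fit call)
lo = c - s/2;  hi = c + s/2;             % voxel bounds
w  = obj.PComponents;                    % mixture weights (ComponentProportion in newer releases)
p  = 0;
for k = 1:numel(w)
    p = p + w(k) * mvncdf(lo, hi, obj.mu(k,:), obj.Sigma(:,:,k));   % mass of component k inside the voxel
end
expectedCount = N * p;                   % expected number of points in the voxel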

MATLAB Image Processing - Find Edge and Area of Image

As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
Load image, crop, and convert to grayscale
Process to reduce noise
Find edge/perimeter
Attempt to join edges
Fill perimeter with white
Measure Area and Perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg') ;
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you might have room to improve the effect of the edge detection in addition to the morphological transformations; for instance, the following resulted in what appeared to me to be a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this.
% interactively fill internal regions
[ny, nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2 = tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...')
while 1
    pts = ginput(1)
    tyjoin2 = bwfill(tyjoin2, pts(1,1), pts(1,2), 8);
    imshow(tyjoin2)
    title(titl)
    if (pts(1,1)<1 | pts(1,1)>nx | pts(1,2)<1 | pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to
A book:
http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo:
http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
and some Matlab code on the File Exchange:
http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
and a link to a description on how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image).
% This will probably make the snake work better, and the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more that I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit in the outline and then use the shapes' area to estimate the total area. For instance, compute a minimal circle around the edge set then a maximal circle inside the edges. Then you could use these to estimate diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
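A rough sketch of that bounding-shapes idea (my own code; it assumes a filled binary mask BW of the figure already exists, and the outer circle about the centroid is only an approximation of the true minimal enclosing circle):
D = bwdist(~BW);                            % distance of every interior pixel to the background
rInner = max(D(:));                         % radius of the largest inscribed circle
[py, px] = find(bwperim(BW));               % perimeter pixel coordinates
cx = mean(px);  cy = mean(py);              % centroid used as an approximate circle centre
rOuter = max(hypot(px - cx, py - cy));      % enclosing circle about the centroid
areaLow  = pi*rInner^2;                     % lower estimate of the area
areaHigh = pi*rOuter^2;                     % upper estimate of the area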

Find Overlapping Region between 3-Dimensional Shapes

I'm currently plotting 2 separate 3-dimensional amorphous blobs which overlap each other. I have created the blobs by deforming a unit sphere (as you can see in the code provided below). My question is: is there an easy way to isolate the overlapping region? I need to isolate the overlapping region and then color it differently (for example, turn the region green) to clearly show where the overlap is. My actual program has many shapes that overlap; however, for the sake of simplicity, I have produced the following code to illustrate what I am trying to do:
% Create Sphere with 100 points
N = 100; % sphere grid points
[X,Y,Z] = sphere(N); % get x,y,z coordinates for sphere
num=size(X,1)*size(X,2); % get total amount of x-coordinates (m*n)
% Loop through every x-coordinate and apply scaling if applicable
for k=1:num                  % loop through every coordinate
    value=X(k);              % store original value of X(k) as value
    if value<0               % compare value to 0
        X(k)=0.3*value;      % if < 0, scale value
    end
end
% Loop through every z-coordinate and apply scaling if applicable
for k=1:num                  % loop through every coordinate
    value=Z(k);              % store original value of Z(k) as value
    if value>0               % compare value to 0
        Z(k)=0.3*value;      % if > 0, scale value
    end
end
mesh(X,Y,Z,'facecolor','y','edgecolor','y','facealpha',...
0.2,'edgealpha',0.2);
hold on
mesh(-1*(X-1),Y,Z,'facecolor','r','edgecolor','r','facealpha',...
0.2,'edgealpha',0.2);
hold off
axis equal
I'm not necessarily looking for code, just an effective algorithm or process to achieve the desired result, as I need to adapt it to the more sophisticated program I have.
Maintain an n-dimensional array of integers; as you draw your objects, increment each corresponding point in the array. When done, loop through the array: each element > 1 marks an overlap between two or more objects. Use the array coordinates to color the objects based on the number of overlaps.
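A sketch of that counting idea for the two blobs in the question (my own code with an assumed voxel size; discretize needs R2015a or newer, and note this marks the voxels crossed by each surface, so solid overlap would require filling the interiors first):
vs = 0.05;                                             % assumed voxel edge length
edges = -2:vs:3;                                       % grid that covers both blobs
occ = zeros(numel(edges), numel(edges), numel(edges), 'uint8');
blobs = { [X(:) Y(:) Z(:)], [-(X(:)-1) Y(:) Z(:)] };   % surface points of the two blobs
for b = 1:numel(blobs)
    ix = discretize(blobs{b}(:,1), edges);             % voxel index along each axis
    iy = discretize(blobs{b}(:,2), edges);
    iz = discretize(blobs{b}(:,3), edges);
    idx = unique(sub2ind(size(occ), ix, iy, iz));      % each blob counts at most once per voxel
    occ(idx) = occ(idx) + 1;
end
overlapVoxels = find(occ > 1);                         % voxels touched by both blobs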
I have worked in 2D with the MATLAB built-in function inpolygon to find overlapping areas. However, it does not natively support 3D. I would suggest you try the inhull function, which you can find on the File Exchange. Please note it only supports convex hulls.
If that doesn't help you, maybe you can find some inspiration in this discussion.

measure valley width 2d, matlab

I have a 2D image and the locations where local minima occur.
I want to measure the width of the valleys "leading" to those minima.
I need either the radii of the circles or the ellipses fitted to these valleys.
An example is attached here; the dark red lines on the peak contours are what I wish to find.
Thanks.
I am partially extending the answer of @Lucas.
Given a threshold t, I would consider the points P_m that are below t and close to a certain minimum point m of your f (given a characteristic scale length r).
(You said your data are noisy; to distinguish minima and talk about wells, you need to estimate such r. In your example it can be for instance r=4, i.e. half the distance between the minima).
Then you have to consider a metric for each well region P_m, say for example
metric(P_m) = 0.5 * mean{ maximum vertical diameter of P_m,
                          maximum horizontal diameter of P_m }.
In your picture metric(P_m) = 2 for both wells.
On the whole, in terms of pseudo-code you may consider
M := set of local minima of f
for_each (minimum m in M) {
    P_m += { p : d(p,m) < r and f(p) < t }   % say that += is the push operation on a stack
    radius_of_region_around(m) = metric(P_m)
}
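In MATLAB terms, a sketch of this pseudo-code could look like the following (my code; f is the 2D data, mins an n-by-2 list of [row col] minima, and r, t the scale and threshold discussed above):
[nr, nc] = size(f);
[cols, rows] = meshgrid(1:nc, 1:nr);                   % pixel coordinates
radiusOfRegion = zeros(size(mins,1), 1);
for i = 1:size(mins,1)
    m = mins(i,:);
    near = hypot(rows - m(1), cols - m(2)) < r;        % d(p,m) < r
    Pm = near & (f < t);                               % ...and f(p) < t
    vert  = max(sum(Pm, 1));                           % maximum vertical extent of the well
    horiz = max(sum(Pm, 2));                           % maximum horizontal extent of the well
    radiusOfRegion(i) = 0.5 * mean([vert, horiz]);     % the metric above
end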
I would suggest making a list of points that describe the values at the edge of your ellipse, perhaps by finding all the points where it crosses a threshold.
above = data > threshold
Apply a simple edge detector (EdgeDetector is a placeholder; MATLAB's edge function would do):
edges = EdgeDetector(above)
Find the coordinates of the edges:
[row,col] = find(edges)
Then apply this ellipse fitter http://www.mathworks.com/matlabcentral/fileexchange/3215-fitellipse
I'm assuming here you have access to the x, y and z data and are not processing a given JPG (or so) image. Then, you can use the function contourc to your advantage:
% plot some example function
figure(1), clf, hold on
[x,y,z] = peaks;
surf(x,y,z+10,'edgecolor', 'none')
grid on, view(44,24)
% generate contour matrix. The last entry is a 2-element vector, the last
% element of which is to ensure the right algorithm gets called (so leave
% it untouched), and the first element is your threshold.
C = contourc(x(1,:), y(:,1), z, [-4 max(z(:))+1]);
% plot the selected points
plot(C(1,2:end), C(2,2:end), 'r.')
Then use this superfast ellipse fitting tool to fit an ellipse through those points and find all the parameters of the ellipse you desire.
I suggest you read help contourc and doc contourc to find out why the above works, and what else you can use it for.