Plotting graphs for point cloud data - matlab

(Example) Point cloud data
Consider an N x 36 matrix, where N is the number of points in the point cloud (a 3D object) and the 36 columns are the extracted features, of which the last 3 columns are the (x, y, z) coordinates of every point.
Now I wish to perform feature analysis. In this regard, I would like to know how to plot/represent/describe each feature f1, f2, ..., f33 over the (x, y, z) coordinates for which it was computed (to understand the behavior of the features)?
Is this possible? How? If not, what is the alternative?
E.g.: a two-feature file bunny_example of size N x 5, where N is the number of points, the 1st and 2nd columns are features (f1, f2) and the 3rd to 5th columns are the (x, y, z) coordinate values.

This code may help you:
% Provided example data
features = [2.0684e-05 7.5750e-06 3.8389e-05 1.0346e-05 -8.3302e-06;
0.0002 -1.7019e-05 -0.0002 -3.8879e-05 8.1841e-05;
-2.3888e-05 -3.5798e-05 2.0476e-05 -4.7382e-05 3.8213e-05;
7.7594e-06 2.9854e-06 3.0756e-05 -1.9135e-06 1.3463e-05;
3.4250e-05 5.6627e-06 7.3759e-06 -8.1303e-05 -1.5577e-05;
4.7731e-06 4.9014e-06 2.5750e-05 2.3827e-06 6.2936e-06;
2.4317e-05 0.0007 3.1783e-05 0.0001 -0.0001;
2.6632e-05 0.0009 0.0001 0.0001 -0.0005;
-1.9714e-05 -1.2456e-06 5.5657e-06 1.8092e-05 1.3787e-05];
points = [-0.6011 -0.9712 0.3268;
-0.5721 -0.9712 0.3379;
-0.5721 -0.9854 0.32794;
-0.5817 -0.948 0.3298;
0.0708 -0.583 -0.2528;
-0.5721 -0.9429 0.32794;
-0.312 -0.9940 0.4074;
-0.286 -0.994 0.4174;
-0.0864 0.4534 -0.7729];
% To create the colormap - like heatmap
initial_hsv = [0 1 1]; % red
final_hsv = [2/3 1 1]; % blue
point_size = 10;
max_hsv = max([initial_hsv; final_hsv]);
min_hsv = min([initial_hsv; final_hsv]);
dif_hsv = max_hsv - min_hsv;
% For each feature
for f = 1 : size(features, 2)
f_max = max(features(:, f));
f_min = min(features(:, f));
c_data = zeros(size(points, 1), 3);
% For each point
for p = 1 : size(points, 1)
for hsv_comp = 1:3
if dif_hsv(hsv_comp) ~= 0
% Logarithmic mapping
c_data(p, hsv_comp) = (1 - ((log(features(p, f)+abs(f_min)+1) - log(f_min+abs(f_min)+1)) / (log(f_max+abs(f_min)+1) - log(f_min+abs(f_min)+1)))) * dif_hsv(hsv_comp) + min_hsv(hsv_comp);
else
c_data(p, hsv_comp) = min_hsv(hsv_comp);
end
end
end
% Converting from HSV to RGB
c_data(:,:) = hsv2rgb(c_data(:,:));
% Creating a figure for each feature
str = sprintf('Feature number %d' , f);
figure
scatter3(points(:,1), points(:,2), points(:,3), point_size*ones(size(points,1),1), c_data);
title(str)
end
NOTES:
You can change the values of initial_hsv, final_hsv and point_size according to your needs.
The way c_data is computed (this example uses a logarithmic mapping) can also be changed in order to have smoother color transitions based on your feature data.
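As an aside (not part of the answer above), a simpler sketch that lets MATLAB do the color mapping through the current colormap, reusing the features, points and point_size variables from the example:
for f = 1 : size(features, 2)
    figure
    scatter3(points(:,1), points(:,2), points(:,3), point_size, features(:,f), 'filled');
    colormap(jet)   % jet maps low values to blue and high values to red, like the HSV scheme above
    colorbar        % shows which color corresponds to which feature value
    title(sprintf('Feature number %d', f))
end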

Which Bins are occupied in a 3D histogram in MatLab

I have 3D data from which I need to calculate properties.
To reduce computation I wanted to discretize the space, calculate the properties from the bins instead of the individual data points, and then reassign the property calculated for a bin back to its data points.
I further only want to calculate the bins which have points within them.
Since there is no 3D binning function in MATLAB, what I do is use histcounts over each dimension and then search for the unique bins that have been assigned to the data points.
a5pre=compositions(:,1);
a7pre=compositions(:,2);
a8pre=compositions(:,3);
%% BINNING
a5pre_edges=[0,linspace(0.005,0.995,19),1];
a5pre_val=(a5pre_edges(1:end-1) + a5pre_edges(2:end))/2;
a5pre_val(1)=0;
a5pre_val(end)=1;
a7pre_edges=[0,linspace(0.005,0.995,49),1];
a7pre_val=(a7pre_edges(1:end-1) + a7pre_edges(2:end))/2;
a7pre_val(1)=0;
a7pre_val(end)=1;
a8pre_edges=a7pre_edges;
a8pre_val=a7pre_val;
[~,~,bin1]=histcounts(a5pre,a5pre_edges);
[~,~,bin2]=histcounts(a7pre,a7pre_edges);
[~,~,bin3]=histcounts(a8pre,a8pre_edges);
bins=[bin1,bin2,bin3];
[A,~,C]=unique(bins,'rows','stable');
a5pre=a5pre_val(A(:,1));
a7pre=a7pre_val(A(:,2));
a8pre=a8pre_val(A(:,3));
It seems that the unique function is pretty time consuming, so I was wondering if there is a faster way to do it, knowing that each row can only contain integers... or a totally different approach.
Best regards
function [comps,C]=compo_binner(x,y,z,e1,e2,e3,v1,v2,v3)
% For each point, look up its bin centre in each dimension (v1/v2/v3 indexed
% by how many edges of e1/e2/e3 lie below it), collect the unique occupied
% bins in comps, and store each point's row index into comps in C.
C=NaN(length(x),1);
comps=NaN(length(x),3);
id=1;
for i=1:numel(x)
B_temp(1,1)=v1(sum(x(i)>e1));
B_temp(1,2)=v2(sum(y(i)>e2));
B_temp(1,3)=v3(sum(z(i)>e3));
C_id=sum(ismember(comps,B_temp),2)==3;
if sum(C_id)>0
C(i)=find(C_id);
else
comps(id,:)=B_temp;
id=id+1;
C_id=sum(ismember(comps,B_temp),2)==3;
C(i)=find(C_id>0);
end
end
comps(any(isnan(comps), 2), :) = [];
end
But it's way slower than the histcounts/unique version. I can't avoid the find function, and that's a function you surely want to avoid in a loop when it's about speed...
If I understand correctly, you want to compute a 3D histogram. Since there's no built-in tool to compute one, it is simple to write one:
function [H, lindices] = histogram3d(data, n)
% histogram3d 3D histogram
% H = histogram3d(data, n) computes a 3D histogram from (x,y,z) values
% in the Nx3 array `data`. `n` is the number of bins between 0 and 1.
% It is assumed all values in `data` are between 0 and 1.
assert(size(data,2) == 3, 'data must be Nx3');
H = zeros(n, n, n);
indices = floor(data * n) + 1;
indices(indices > n) = n;
lindices = sub2ind(size(H), indices(:,1), indices(:,2), indices(:,3));
for ii = 1:size(data,1)
H(lindices(ii)) = H(lindices(ii)) + 1;
end
end
Now, given your compositions array, and binning each dimension into 20 bins, we get:
[H, indices] = histogram3d(compositions, 20);
idx = find(H);
[x,y,z] = ind2sub(size(H), idx);
reduced_compositions = ([x,y,z] - 0.5) / 20;
The bin centers for H are at ((1:20)-0.5)/20.
On my machine this runs in a fraction of a second for 5 million input points.
Now, for each composition(ii,:), you have a number indices(ii), which matches another number idx(jj), corresponding to reduced_compositions(jj,:). One easy way to make the assignment of results is as follows:
H(H > 0) = 1:numel(idx);
indices = H(indices);
Now for each composition(ii,:), your closest match in the reduced set is reduced_compositions(indices(ii),:).
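A minimal follow-up sketch of that reassignment step; prop_bin is a hypothetical per-bin property, computed here only for illustration, and indices is the remapped vector from the two lines above:
prop_bin = sum(reduced_compositions, 2);   % hypothetical property, one value per occupied bin
prop_per_point = prop_bin(indices);        % hand each point the property of its bin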

Color a voronoi diagram in MATLAB according to the color values of the initial points

I have a matrix of points named start_coord containing their x and y coordinates, as well as a column denoting their classification (1-5). I.e. the first row looks like [75, 100, 4].
I've calculated a voronoi diagram of this data using the code below
[vc_x, vc_y] = voronoi(start_coord(:,1), start_coord(:,2));
How would I go about coloring the resulting polygons by the classification value of the point contained within each polygon, i.e. the third column in start_coord?
EDIT
For quick plotting of the polygons by color, refer to the answer below, which helped inform this edit. For getting the Voronoi polygons of thousands of points written to an array that can be saved as an image, refer to this code:
new_map = zeros(sm_size(1), sm_size(2));
start_coord = readmatrix(char(join([csv_path, '/', run_types(run), common_name_csv], "")));
sc_size = size(start_coord);
dt = delaunayTriangulation(start_coord(:,1:2));
[V,R] = voronoiDiagram(dt);
for i = 1:sc_size(1)
A=V(R{i},:);
B=A(any(~isinf(A),2),:); % omit points at infinity
bw = poly2mask(B(:,1), B(:,2), sm_size(1), sm_size(2));
new_map(bw == 1) = color_map(start_coord(i,3));
end
new_map can then be saved as an array or converted to RGB and saved as an image.
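As an aside, a minimal sketch of that last step, assuming color_map(k) returns a small integer class label so that new_map holds values 0 through 5 with 0 as background:
cmap = [0 0 0; lines(5)];                               % black background plus 5 class colors
imwrite(uint8(new_map), cmap, 'voronoi_classes.png');   % save as an indexed image
rgb = ind2rgb(uint8(new_map), cmap);                    % or keep an RGB array in memory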
Use voronoiDiagram to get the polygons.
dt = delaunayTriangulation(start_coord(:,1:2));
[V,R] = voronoiDiagram(dt);
Then R{i} will contain the indices (into V) of the vertices of the polygon around start_coord(i,:)
So set the color to start_coord(i,3)'s color and:
A=V(R{i},:);
B=A(any(~isinf(A),2),:); % omit points at infinity
plot(polyshape(B));
The only hiccup there is that the vertices at infinity get chopped off. But maybe that will get you close enough to what you want. If you need to fill to the edge, check out the VoronoiLimit function (which I have not tested).
e.g.:
X = [-1.5 3.2; 1.8 3.3; -3.7 1.5; -1.5 1.3; ...
0.8 1.2; 3.3 1.5; -4.0 -1.0;-2.3 -0.7; ...
0 -0.5; 2.0 -1.5; 3.7 -0.8; -3.5 -2.9; ...
-0.9 -3.9; 2.0 -3.5; 3.5 -2.25];
X(:,3) = [ 1 2 1 3 1 2 2 2 2 3 3 3 3 3 3]';
ccode = ["red","green","blue"];
dt = delaunayTriangulation(X(:,1:2));
[V,R] = voronoiDiagram(dt);
figure
voronoi(X(:,1),X(:,2))
hold on
for i = 1:size(X,1)
A=V(R{i},:);
B=A(any(~isinf(A),2),:);
if(size(B,1)>2)
plot(polyshape(B),'FaceColor',ccode(X(i,3)));
end
end
Result: a figure showing each Voronoi cell filled with the color of its point's class.

How to compute confidence intervals and plot them on a bar plot

How can I plot a bar chart out of a
data = 1x10 cell
where each element of the cell has a different dimension, like 3x100, 3x40, 66x2, etc.?
My goal is to get a bar plot where I would have 10 groups of bars and in every group three bars, one for each of the values. I want the median of the values to be shown on the bars, and I want to calculate the confidence interval and show it additionally.
In this example there are no groups of bars, but my point is to show how I want the confidence intervals displayed. On the site where I found this example, they offer a solution with this command line
e1 = errorbar(mean(data), ci95);
but I have the problem that it can't find any ci95.
So, are there any other effective ways to do it, without installing or downloading anything additional?
I've found Patrick Happel's answer to not work because the figure window (and therefore the variable b) gets cleared out by subsequent calls to errorbar. Simply adding a hold on command takes care of this. To avoid confusion, here's a new answer that reproduces all of Patrick's original code, plus my small tweak:
%% Old answer
%Just to be safe, let's clear everything
clear all
data = cell(1,10);
% Random length of the data
l = randi(500, 10, 1) + 50;
% Random "width" of the data, with 3 more likely
w = randi(4, 10, 1);
w(w==4) = 3;
% random "direction" of the data
d = randi(2, 10, 1);
% sigma of the data (in fraction of mean)
sigma = rand(10,1) / 3;
% means of the data
dmean = randi(150,10,1);
dsigma = dmean.*sigma;
for c = 1 : 10
if d(c) == 1
data{c} = randn(l(c), w(c)) .* dsigma(c) + dmean(c);
else
data{c} = randn(w(c), l(c)) .* dsigma(c) + dmean(c);
end
end
%============================================
%Next thing is
% On the bar, I want it to be shown the median of the values, and I
% want to calculate the confidence interval and show it additionally.
%
%Are you really sure you want to plot the median? The median of some data
%is not connected to the variance of the data, and thus no type of error
%bars are required. I guess you want to show the mean. If you really want
%to show the median, a box plot might be a better alternative.
%
%The following code computes and plots the mean in a bar plot:
%============================================
means = zeros(numel(data),3);
stds = zeros(numel(data),3);
n = zeros(numel(data),3);
for c = 1:numel(data)
d = data{c};
if size(d,1) < size(d,2)
d = d';
end
cols = size(d,2);
means(c, 1:cols) = nanmean(d);
stds(c, 1:cols) = nanstd(d);
n(c, 1:cols) = sum(~isnan((d)));
end
b = bar(means);
%% New code
%This ensures that b continues to reference existing data in the next for
%loop, as the graphics objects can otherwise be deleted.
hold on
%% Continuing Patrick Happel's answer
%============================================
%Now, we need to compute the length of the error bars. Typical choices are
%the standard deviation of the data (already computed by the code above,
%stored in stds), the standard error or the 95% confidence interval (which
%is 1.96 times the standard error, assuming the underlying data
%follows a normal distribution).
%============================================
% for standard deviation use stds
% for standard error
ste = stds./sqrt(n);
% for 95% confidence interval
ci95 = 1.96 * ste;
%============================================
%Last thing is to plot the error bars. Here I chose the ci95 as you asked
%in your question, if you want to change that, simply change the variable
%in the call to errorbar:
%============================================
for c = 1:3
% sanity check: these two lines just print the sizes and can be removed
size(means(:, c))
size(b(c).XData)
e = errorbar(b(c).XData + b(c).XOffset, means(:,c), ci95(:, c));
e.LineStyle = 'none';
end
Since I am not sure what your data looks like, given that in your question you stated that the elements of the cell contain data with different dimensions like
3x100, 3x40, 66x2
I assume that your data can be arranged in columns or rows and that not all data requires three bars.
Since you did not provide a short piece of your data for us to test, I generate some artificial data:
data = cell(1,10);
% Random length of the data
l = randi(500, 10, 1) + 50;
% Random "width" of the data, with 3 more likely
w = randi(4, 10, 1);
w(w==4) = 3;
% random "direction" of the data
d = randi(2, 10, 1);
% sigma of the data (in fraction of mean)
sigma = rand(10,1) / 3;
% means of the data
dmean = randi(150,10,1);
dsigma = dmean.*sigma;
for c = 1 : 10
if d(c) == 1
data{c} = randn(l(c), w(c)) .* dsigma(c) + dmean(c);
else
data{c} = randn(w(c), l(c)) .* dsigma(c) + dmean(c);
end
end
Next thing is
On the bar, I want it to be shown the median of the values, and I want to calculate the confidence interval and show it additionally.
Are you really sure you want to plot the median? The median of some data is not connected to the variance of the data, and thus no type of error bars is required. I guess you want to show the mean. If you really want to show the median, a box plot might be a better alternative (see the sketch at the end of this answer).
The following code computes and plots the mean in a bar plot:
means = zeros(numel(data),3);
stds = zeros(numel(data),3);
n = zeros(numel(data),3);
for c = 1:numel(data)
d = data{c};
if size(d,1) < size(d,2)
d = d';
end
cols = size(d,2);
means(c, 1:cols) = nanmean(d);
stds(c, 1:cols) = nanstd(d);
n(c, 1:cols) = sum(~isnan((d)));
end
b = bar(means);
Now we need to compute the length of the error bars. Typical choices are the standard deviation of the data (already computed by the code above and stored in stds), the standard error, or the 95% confidence interval (which is 1.96 times the standard error, assuming the underlying data follows a normal distribution).
% for standard deviation use stds
% for standard error
ste = stds./sqrt(n);
% for 95% confidence interval
ci95 = 1.96 * ste;
Last thing is to plot the error bars. Here I chose ci95, as you asked in your question; if you want to change that, simply change the variable in the call to errorbar:
for c = 1:3
% sanity check: these two lines just print the sizes and can be removed
size(means(:, c))
size(b(c).XData)
e = errorbar(b(c).XData + b(c).XOffset, means(:,c), ci95(:, c));
e.LineStyle = 'none';
end
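If the median really is what you want, here is a minimal box-plot sketch (an aside, not part of the answer above). It reuses the artificial data cell generated earlier and uses boxchart, available in base MATLAB since R2020a; boxplot from the Statistics Toolbox would work similarly with a grouping variable:
y = []; g = {};
for c = 1:numel(data)
    d = data{c};
    if size(d,1) < size(d,2)
        d = d';                                  % columns are the sub-bars, as above
    end
    for k = 1:size(d,2)
        y = [y; d(:,k)];                         % collect all values
        g = [g; repmat({sprintf('%02d.%d',c,k)}, size(d,1), 1)];   % one group label per value
    end
end
figure
boxchart(categorical(g), y)                      % one box per cell/column combination
ylabel('value')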

plot interpolate a curve between 2 different types of curves in matlab

I have the following data, which predicts a curve in the middle of two curves that have different equations and data. I also need to spline and smooth the middle curve.
I've tried searching other code here on Stack Overflow and this is the closest to the right solution. So far the plot for the two curves is right, but the interpolated points give me the wrong plot.
I'm trying to find the plot for val=30, given that (a25,vel25) corresponds to 25 and (a50,vel50) corresponds to 50. Please help me troubleshoot and get a table of data (x,y) for the generated interpolated curve. Thanks for your help.
Figure: the plot generated using this program.
a50=[1.05
0.931818182
0.931818182
0.968181818
1.045454545
1.136363636
1.354545455
1.568181818
1.718181818
1.945454545
2.159090909
2.454545455
2.772727273
];
vel50=[0.85
0.705555556
0.605555556
0.533333333
0.472222222
0.45
0.427777778
0.45
0.477777778
0.533333333
0.611111111
0.711111111
0.827777778
];
a25=[0.5
0.613636364
0.686363636
0.795454545
0.918181818
0.963636364
1.090909091
1.236363636
1.304545455
1.431818182
1.545454545
1.659090909
1.818181818
];
vel25=[0.425555556
0.354444444
0.302222222
0.266666667
0.233333333
0.226666667
0.211111111
0.222222222
0.237777778
0.266666667
0.311111111
0.35
0.402222222
];
plot(a25,vel25,'b-');
hold on
plot(a50,vel50,'g-');
minX = min([a25 a50]);
maxX = max([a25,a50]);
xx = linspace(minX,maxX,100);
vel25_inter = interp1(a25,vel25,xx);
vel50_inter = interp1(a50,vel50,xx);
val = 30; % The interpolated point
interpVel = vel25_inter + ((val-25).*(vel50_inter-vel25_inter))./(50-25);
plot(xx,interpVel,'r-');
The question and answer linked in the comments still apply and can be a solution.
In your case, it is not so direct because your data are not on the same grid and some are not monotonic, but once they are packaged properly, the easiest solution is still to use griddata.
By packaged properly, I mean finding the maximum common interval (on x, or what you call a), so the data can be interpolated between the curves without producing NaNs.
This seems to work:
The red dashed line shows the values interpolated at val=30; all the other lines are interpolations for values between 25 and 50.
The code to get there:
% back up original data, just for final plot
bkp_a50 = a50 ; bkp_vel50 = vel50 ;
% make second x vector monotonic
istart = find( diff(a50)>0 , 1 , 'first') ;
a50(1:istart-1) = [] ;
vel50(1:istart-1) = [] ;
% prepare a 3rd dimension vector (from 25 to 50)
T = [repmat(25,size(a25)) ; repmat(50,size(a50)) ] ;
% merge all observations together
A = [ a25 ; a50] ;
V = [vel25 ; vel50] ;
% find the minimum domain on which data can be interpolated
% (anything outside of that will return NaN)
Astart = max( [min(a25) min(a50)] ) ;
Astop = min( [max(a25) max(a50)] ) ;
% use the function 'griddata'
[TI,AI] = meshgrid( 25:50 , linspace(Astart,Astop,10) ) ;
VI = griddata(T,A,V,TI,AI) ;
% plot all the intermediate curves
plot(AI,VI)
hold on
% the original curves
plot(a25,vel25,'--k','linewidth',2)
plot(bkp_a50,bkp_vel50,'--k','linewidth',2)
% Highlight the curve at T = 30 ;
c30 = find( TI(1,:) == 30 ) ;
plot(AI(:,c30),VI(:,c30),'--r','linewidth',2)
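Since the question also asks for a table of (x, y) data for the interpolated curve, a minimal follow-up sketch reusing AI, VI and c30 from the code above:
curve30 = table(AI(:,c30), VI(:,c30), 'VariableNames', {'a','vel'});   % the val = 30 curve
disp(curve30)
% writetable(curve30, 'curve_val30.csv');   % optionally save it to a file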
There were quite a few issues with your code; that's why it was not executing properly. I have made just a few small changes to your code to get it running:
clc
%13
a50=[1.05
0.931818182
0.932
0.968181818
1.045454545
1.136363636
1.354545455
1.568181818
1.718181818
1.945454545
2.159090909
2.454545455
2.772727273
];
%13
vel50=[0.85
0.705555556
0.605555556
0.533333333
0.472222222
0.45
0.427777778
0.45
0.477777778
0.533333333
0.611111111
0.711111111
0.827777778
];
%13
a25=[0.5
0.613636364
0.686363636
0.795454545
0.918181818
0.963636364
1.090909091
1.236363636
1.304545455
1.431818182
1.545454545
1.659090909
1.818181818
];
%13
vel25=[0.425555556
0.354444444
0.302222222
0.266666667
0.233333333
0.226666667
0.211111111
0.222222222
0.237777778
0.266666667
0.311111111
0.35
0.402222222
];
plot(a25,vel25,'b-');
hold on
plot(a50,vel50,'g-');
minX = min([a25 a50])
maxX = max([a25 a50])
%xx = linspace(minX,maxX);
xx = linspace(0.5,2.7727,100);
vel25_inter = interp1(a25,vel25,xx);
vel50_inter = interp1(a50,vel50,xx);
val = 30; % The interpolated point
interpVel = vel25_inter + ((val-25).*(vel50_inter-vel25_inter))./(50-25);
plot(xx,interpVel,'r-');
The issues were:
The interval on which you wanted to interpolate is xx = linspace(minX,maxX); but it gives an error of the type
Matrix dimensions must agree.
because you were passing two values for the starting point and two for the ending point (min([a25 a50]) and max([a25 a50]) each return one value per column). So I replaced it with xx = linspace(0.5,2.7727,100); where the starting point is the smaller of the two values in minX, and likewise the end point is the larger of the two values in maxX.
There are repeated values in a50 (0.931818182), which was producing the following error
The grid vectors are not strictly monotonic increasing.
I changed one of the values and replaced it with 0.932.
The output is not that promising, but I suppose it is the one you want?
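As an aside (not part of the original answer), the hard-coded interval from the first issue can also be avoided by concatenating the column vectors vertically, so that min and max return scalars:
minX = min([a25; a50]);      % [a25; a50] is one 26x1 vector, so the result is a scalar
maxX = max([a25; a50]);
xx = linspace(minX, maxX, 100);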

How to loop the potential?

I am currently working on a molecular dynamics simulation of polymers in solution, this is one of the subroutines which calculates the potential energy of the system and the force exerted on each monomer.
function eval_force()
% eval_force.m IS USED FOR EVALUATING FORCE
% THE STRATEGY USUALLY ADOPTED FOR A LENNARD-JONES OR AS A MATTER OF FACT
% ANY PAIR-WISE INTERACTING SYSTEM IS AS FOLLOWS:
% 1. EVALUATE THE DISTANCE BETWEEN TWO PAIRS OF ATOMS
% 2. ENSURE THAT MINIMUM IMAGE CONVENTION (MIC) IS FOLLOWED
% 3. IF THE DISTANCE OBTAINED THROUGH MIC IS GREATER THAN THE CUT OFF
% DISTANCE MOVE TO NEXT PAIR
% 4. ELSE EVALUATE POTENTIAL ENERGY AND CALCULATE FORCE COMPONENTS
% 5. F(i,j) = -F(j,i)
global MASS KB TEMPERATURE NUM_ATOMS LENGTH TSTEP;
global EPS SIG R_CUT GAMMA POT_E;
global POSITION VELOCITY FORCE STO;
dr = zeros(3,1);
drh = zeros(3,1);
FORCE(:) = 0.0;
POT_E = 0.0;
for ( i=1:NUM_ATOMS )
for ( j=i+1:NUM_ATOMS )
dist2 = 0.0; % VARIABLE dist2 STORES DISTANCE BETWEEN PAIR (i,j)
% FIRST FIND OUT THE DIFFERENCE IN X,Y AND Z COORDINATES
% VARIABLE dr IS USED FOR THIS PURPOSE
for(k = 1:3)
dr(k) = POSITION(i,k) - POSITION(j,k);
% THESE STEPS ENSURE MINIMUM IMAGE CONVENTION IS FOLLOWED
if(dr(k) > LENGTH/2.0)
dr(k) = dr(k) - LENGTH;
end
if(dr(k) < -LENGTH/2.0)
dr(k) = dr(k) + LENGTH;
end
% MINIMUM IMAGE CONVENTION ENDS HERE
dist2 = dist2 + dr(k)*dr(k); % dist2 IS BASED UPON MIC
end
if(dist2 <= R_CUT*R_CUT) % IF THE CUT OFF CRITERIA IS SATISFIED
dist2i = power(SIG,2)/dist2;
dist6i = power(dist2i,3);
dist12i = power(dist6i,2);
POT_E = POT_E + EPS * (dist12i - 2*dist6i) + 33.34 * EPS * power(sqrt(dist2) - SIG,2)/(2 * power(SIG,2)); % STORES THE POTENTIAL ENERGY
Ff = 12.0 * EPS * (dist12i-dist6i) - 33.34 * EPS * (sqrt(dist2) - SIG)/(dist2i * sqrt(dist2) * power(SIG,2));
Ff = Ff * dist2i;
for(k = 1:3)
FORCE(i,k) = FORCE(i,k) + Ff*dr(k)- GAMMA*VELOCITY(i,k);
FORCE(j,k) = FORCE(j,k) - Ff*dr(k)- GAMMA*VELOCITY(j,k);
end
end
end
end
end
How can I make a loop for the "33.34 * EPS * power(sqrt(dist2) - SIG,2)/(2 * power(SIG,2))" part of POT_E, which is the harmonic potential, so that it only evaluates the distance between atoms for the nearest ones (for j = i+1 to 4)?
After having a quick look at your code, I'd suggest the following ideas:
Calculate all-to-all distances at once using pdist (with squareform to get a square matrix) and store them in a matrix, say AllDist.
The conditions based on LENGTH can be applied directly on AllDist.
Since you need to find the four nearest neighbors, you need a single loop iterating through the rows (or columns) of AllDist, which sorts the current row (or column) and gives you the four nearest neighbors. Note that for each atom you'd get 0 as the nearest distance, as this is the "self-distance"; ignore it (see the sketch after this list).
If you have access to Matlab's Parallel Computing Toolbox, try to use it (parfor) where appropriate to accelerate your simulation.
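A minimal sketch of the first and third suggestions, assuming POSITION is NUM_ATOMS x 3 and that pdist/squareform (Statistics and Machine Learning Toolbox) are available; note that, unlike eval_force, it ignores the minimum image convention:
AllDist = squareform(pdist(POSITION));      % NUM_ATOMS x NUM_ATOMS distance matrix
num_atoms = size(POSITION, 1);
nearest = zeros(num_atoms, 4);              % indices of the 4 nearest atoms of each atom
for i = 1:num_atoms
    [~, order] = sort(AllDist(i,:));        % order(1) is atom i itself (self-distance 0)
    nearest(i,:) = order(2:5);              % skip the self-distance
    % the harmonic term would then be evaluated only for j in nearest(i,:)
end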