Optimize sum of neighbor points 3D - matlab

I have a materials matrix where the values indicate the type of material (a value between 1 and 8). Each value below 5 indicates an "interesting" material. Now, at a certain point, I want to count the non-interesting materials among its neighbours. So in a 3D matrix the result at one point can be a value between 0 and 6. One of the complications is that the "current" point may lie at the edge of the 3D matrix. I can solve this using three very expensive for-loops:
materials; % given 3D matrix, e.g. 97*87*100
matrixSize = size(materials);
n = matrixSize(1)*matrixSize(2)*matrixSize(3); % total number of points
materialsFlattened = reshape(materials, [n 1]); % flatten the 3D matrix to a vector
pageSize = matrixSize(1)*matrixSize(2); % size of a page in z-direction
interestingMaterials = materialsFlattened(:) < 5; % logical vector indicating whether each material is interesting
n_bc = zeros(n, 1); % number of non-interesting neighbour materials
for l = 1:matrixSize(3)         % loop over all z
    for k = 1:matrixSize(2)     % loop over all y
        for j = 1:matrixSize(1) % loop over all x
            n_bc(sub2ind(matrixSize, j, k, l)) = ...
                  ~interestingMaterials(sub2ind(matrixSize, j, k, max(1, l-1))) ...
                + ~interestingMaterials(sub2ind(matrixSize, j, max(1, k-1), l)) ...
                + ~interestingMaterials(sub2ind(matrixSize, max(1, j-1), k, l)) ...
                + ~interestingMaterials(sub2ind(matrixSize, min(matrixSize(1), j+1), k, l)) ...
                + ~interestingMaterials(sub2ind(matrixSize, j, min(matrixSize(2), k+1), l)) ...
                + ~interestingMaterials(sub2ind(matrixSize, j, k, min(matrixSize(3), l+1)));
        end
    end
end
Note that I first flatten the matrix to a vector using reshape. The min and max calls ensure that I do not index out of bounds; instead I take the value of the material where I currently am. For my application speed is of the essence, and I was hoping to get rid of this ugly loop-in-loop structure. That is often possible in MATLAB, as element-wise indexing is amazing and sometimes kind of magic.
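One possible vectorization (a sketch, not from the original post): take the logical mask of non-interesting cells, replicate-pad it by one cell on each side, and add six shifted copies. The replicate padding reproduces the min/max clamping, because an out-of-bounds neighbour falls back to the boundary value itself:
NI  = materials >= 5;                                  % logical mask of non-interesting cells
NIp = NI([1 1:end end], [1 1:end end], [1 1:end end]); % replicate-pad by one cell per side
n_bc = NIp(1:end-2, 2:end-1, 2:end-1) ...              % x-1 neighbour
     + NIp(3:end,   2:end-1, 2:end-1) ...              % x+1 neighbour
     + NIp(2:end-1, 1:end-2, 2:end-1) ...              % y-1 neighbour
     + NIp(2:end-1, 3:end,   2:end-1) ...              % y+1 neighbour
     + NIp(2:end-1, 2:end-1, 1:end-2) ...              % z-1 neighbour
     + NIp(2:end-1, 2:end-1, 3:end);                   % z+1 neighbour
n_bc = n_bc(:); % flatten to match the loop's column-major output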

Related

How to Deal with Edge Cases: For Loops and Modulo

I'm trying to apply bare-bones image processing to images like this one. My for-loop does exactly what I want it to: it lets me find the pixels of highest intensity and also remember their coordinates. However, the code breaks whenever the linear index is a multiple of the number of rows, which in this case is 18.
For example, the total number of pixels in this image (rows * columns) is 414, so there are 414/18 = 23 cases where the program fails (i.e., one per column).
Perhaps there is a better way to accomplish my goal, but this is the only way I could think of sorting an image by pixel intensity while also knowing the coordinates of each pixel. Happy to take suggestions of alternative code, but it'd be great if someone had an idea of how to handle the cases where mod(x,18) = 0 (i.e., when the index of the vector is divisible by the total number of rows).
image = imread('test.tif'); % feed program an image
image_vector = image(:);    % vectorize image
[sortMax, sortIndex] = sort(image_vector, 'descend'); % sort vector so that highest intensity pixels are at top
max_sort = [];
[rows, cols] = size(image);
for i = 1:length(image_vector)
    x = mod(sortIndex(i,1), rows); % retrieve original coordinates of pixels from matrix "image"
    y = floor(sortIndex(i,1)/rows) + 1;
    if image(x,y) > 0.5 * sortMax(1) % filter out background noise; sortMax(1) is the peak intensity (a bare 'max' would shadow the builtin)
        max_sort(i,:) = [x, y];
    else
        continue
    end
end
You know that MATLAB indexing starts at 1, because you do +1 when you compute y. But you forgot to subtract 1 from the index first. Here is the correct computation:
index = sortIndex(i,1) - 1;
x = mod(index,rows) + 1;
y = floor(index/rows) + 1;
This computation is performed by the function ind2sub, which I recommend you use.
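For example, a hypothetical drop-in replacement for the two lines inside the loop:
[x, y] = ind2sub(size(image), sortIndex(i,1));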
Edit: Actually, ind2sub does the equivalent of:
x = rem(sortIndex(i,1) - 1, rows) + 1;
y = (sortIndex(i,1) - x) / rows + 1;
(You can see this by typing edit ind2sub.) rem and mod are the same for positive inputs, so x is computed identically. But for computing y it avoids the floor; I guess that is slightly more efficient.
Note also that
image(x,y)
is the same as
image(sortIndex(i,1))
That is, you can use the linear index directly to index into the two-dimensional array.

Get Matrix of minimum coordinate distance to point set

I have a set of points or coordinates like {(3,3), (3,4), (4,5), ...} and want to build a matrix with the minimum distance to this point set. Let me illustrate using a runnable example:
width = 10;
height = 10;
% Get min distance to those points
pts = [3 3; 3 4; 3 5; 2 4];
sumSPts = size(pts, 1); % number of points
% Helper to determine element coordinates
[cols, rows] = meshgrid(1:width, 1:height);
PtCoords = cat(3, rows, cols);
AllDistances = zeros(height, width, sumSPts);
% To get Roh_I of every pt
for k = 1:sumSPts
    % Get coordinates of current Scribble Point
    currPt = pts(k,:);
    % Get Row and Col diffs
    RowDiff = PtCoords(:,:,1) - currPt(1);
    ColDiff = PtCoords(:,:,2) - currPt(2);
    AllDistances(:,:,k) = sqrt(RowDiff.^2 + ColDiff.^2);
end
MinDistances = min(AllDistances, [], 3);
This code runs perfectly fine, but I have to deal with matrix sizes of about 700 million entries (height = 700, width = 500, sumSPts = 2k), and this slows down the calculation. Is there a better algorithm to speed things up?
As stated in the comments, you don't necessarily have to put everything into one gigantic matrix. You can:
1. Slice the pts matrix into reasonably small slices (say of length 100)
2. Loop over the slices and calculate the Mindistances slice over these points
3. Take the global min
tic
Mindistances = [];
width = 500;
height = 700;
Np = 2000;
pts = [randi(width,Np,1) randi(height,Np,1)];
SliceSize = 100;
[Xcoords, Ycoords] = meshgrid(1:width, 1:height);
% Compute the minima for the slices from 1 to floor(Np/SliceSize)
for i = 1:floor(Np/SliceSize)
    % Calculate indexes of the next slice
    SliceIndexes = ((i-1)*SliceSize+1):i*SliceSize;
    % Get the corresponding points and reshape them to a vector along the 3rd dim.
    Xpts = reshape(pts(SliceIndexes,1), 1, 1, []);
    Ypts = reshape(pts(SliceIndexes,2), 1, 1, []);
    % Do all the diffs between your coordinates and your points using bsxfun singleton expansion
    Xdiffs = bsxfun(@minus, Xcoords, Xpts);
    Ydiffs = bsxfun(@minus, Ycoords, Ypts);
    % Calculate all the distances of the slice in one call
    Alldistances = bsxfun(@hypot, Xdiffs, Ydiffs);
    % Concatenate the mindistances
    Mindistances = cat(3, Mindistances, min(Alldistances, [], 3));
end
% Check if a last, partial slice is needed
if mod(Np, SliceSize) ~= 0
    % Get the corresponding points and reshape them to a vector along the 3rd dim.
    Xpts = reshape(pts(floor(Np/SliceSize)*SliceSize+1:end, 1), 1, 1, []);
    Ypts = reshape(pts(floor(Np/SliceSize)*SliceSize+1:end, 2), 1, 1, []);
    % Do all the diffs between your coordinates and your points using bsxfun singleton expansion
    Xdiffs = bsxfun(@minus, Xcoords, Xpts);
    Ydiffs = bsxfun(@minus, Ycoords, Ypts);
    % Calculate all the distances of the slice in one call
    Alldistances = bsxfun(@hypot, Xdiffs, Ydiffs);
    % Concatenate the mindistances
    Mindistances = cat(3, Mindistances, min(Alldistances, [], 3));
end
% Get global minimum
Mindistances = min(Mindistances, [], 3);
toc
Elapsed time is 9.830051 seconds.
Note:
You won't end up doing fewer calculations. But it will be much less intensive for your memory (700 million doubles take about 5.6 GB), thus speeding up the process (with the help of vectorization as well).
About bsxfun singleton expansion
One of the great strengths of bsxfun is that you don't have to feed it matrices whose dimensions all match; singleton dimensions are expanded to fit.
For example:
Say I have two vectors X and Y defined as :
X=[1 2]; % row vector X
Y=[1;2]; % Column vector Y
And that I want a 2x2 matrix Z built as Z(i,j) = Y(i) + X(j) for 1<=i<=2 and 1<=j<=2.
Suppose you didn't know about the existence of meshgrid (the example is a bit too simple); then you would have to do:
Xs=repmat(X,2,1);
Ys=repmat(Y,1,2);
Z=Xs+Ys;
While with bsxfun you can just do:
Z=bsxfun(#plus,X,Y);
To calculate the value of Z(2,2), for example, bsxfun automatically fetches the second value of X and of Y and computes. This has the advantage of saving a lot of memory space (no need to define Xs and Ys in this example) and of being faster with big matrices.
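On MATLAB R2016b and later, implicit expansion applies the same singleton expansion to the plain arithmetic operators, so the example reduces to:
Z = X + Y; % implicit expansion, equivalent to bsxfun(@plus, X, Y)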
Bsxfun Vs Repmat
If you're interested in comparing the computational time of bsxfun and repmat, here are two excellent (the word is not even strong enough) SO posts by Divakar:
Comparing BSXFUN and REPMAT
BSXFUN on memory efficiency with relational operations

Sample 1D vectors from 3D array using a vector of points

I have an n-channel image and a 100x2 matrix of points (in my case n is 20, but perhaps it is clearer to think of this as a 3-channel image). I need to sample the image at each point and get an nx100 array of these image samples.
I know how to do this with a for loop:
for j = 1:100
    samples(j,:) = image(points(j,1), points(j,2), :);
end
How would I vectorize this? I have tried
samples = image(points);
but this gives 200 samples of 20 channels. And if I try
samples = image(points,:);
this gives me 200 samples of 4800 channels. Even
samples = image(points(:,1),points(:,2));
gives me 100 x 100 samples of 20 channels (one for each possible combination of x in X and y in Y).
A concise way to do this is to reshape your image from [nRows, nCols, nChannels] to [nRows*nCols, nChannels]. Then convert your points array into a linear index (using sub2ind), which corresponds to the new "combined" row index. Finally, to grab all channels, simply use the colon operator (:) for the second dimension, which now represents the channels.
% Determine the new row index that will correspond to each point after we reshape it
sz = size(image);
inds = sub2ind(sz([1, 2]), points(:,1), points(:,2)); % points(:,1) = row, points(:,2) = column, matching the loop above
% Do the reshaping (i.e. flatten the first two dimensions)
reshaped_image = reshape(image, [], size(image, 3));
% Grab the pixels (rows) that we care about for all channels
newimage = reshaped_image(inds, :);
size(newimage)
ans =
   100    20
Now you have the image sampled at the points you wanted for all channels.
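Alternatively (a sketch that was not part of the original answer), you can skip the reshape and index the 3D array directly by building a linear index for every (point, channel) pair; as in the loop above, this assumes points(:,1) holds row subscripts and points(:,2) column subscripts:
[nR, nC, nCh] = size(image);
lin = sub2ind([nR nC], points(:,1), points(:,2));     % linear index into the first channel
samples = image(bsxfun(@plus, lin, (0:nCh-1)*nR*nC)); % 100 x 20; each channel is one page further along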

How to create a 3D spatial density map?

I have a time-dependent system with a varying number of particles (~100k). In fact, each particle represents an interaction in 3D space with a particular strength. Thus, each particle has (X,Y,Z;w): its coordinates plus a weight factor between 0 and 1, showing the strength of the interaction at that coordinate.
Here http://pho.to/9Ztti I have uploaded 10 real-time snapshots of the system, with the particles represented as small reddish dots; the redder the dot, the stronger the interaction.
The question is: how can one produce a 3D (spatial) density map of these particles, preferably in Matlab or Origin Pro 9 or ImageJ? Is there a way to, say, take the average of these images based on the red-color intensity in ImageJ?
Since I have the numerical data for particles (X,Y,Z;w) I can analyze those data in other software as well. So, you are welcome to suggest any other analytical approach/software
Any ideas/comments are welcome!
Assuming your data is in 3D continuous space and your dataset is just a list of the 3d positions of each particle interaction, it sounds like you want to make a 4D weighted histogram. You'll have to chop the 3d space into bins and sum the weighted points in each bin over time, then plot the results in a single 3d plot where color represents the summed weighted interactions over time.
Here's an example with randomly generated particle interactions:
%% Create dataSet of random particle interactions in 3d space
nPts = 5000;
dataSet = [rand(nPts,3)*100, rand(nPts,1), (1:nPts)'];
% dataSet = [x y z interactionStrength imageNumber]
xLimits = [min(dataSet(:,1)) max(dataSet(:,1))];
yLimits = [min(dataSet(:,2)) max(dataSet(:,2))];
zLimits = [min(dataSet(:,3)) max(dataSet(:,3))];
binSize = 10; % number of bins to split each spatial dimension into
binXInterval = (xLimits(2)-xLimits(1))/binSize;
binYInterval = (yLimits(2)-yLimits(1))/binSize;
binZInterval = (zLimits(2)-zLimits(1))/binSize;
histo = [];
for i = xLimits(1)+(binSize/2):binXInterval:xLimits(2)+(binSize/2)
    for j = yLimits(1)+(binSize/2):binYInterval:yLimits(2)+(binSize/2)
        for k = zLimits(1)+(binSize/2):binZInterval:zLimits(2)+(binSize/2)
            %% Filter out particle interactions found within the current spatial bin
            idx = find((dataSet(:,1) > (i - binSize)) & (dataSet(:,1) < i));
            temp = dataSet(idx,:);
            idx = find((temp(:,2) > (j - binSize)) & (temp(:,2) < j));
            temp = temp(idx,:);
            idx = find((temp(:,3) > (k - binSize)) & (temp(:,3) < k));
            temp = temp(idx,:);
            %% Add up all interaction strengths found within this bin
            histo = [histo; i j k sum(temp(:,4))];
        end
    end
end
%% Remove bins with no particle interactions
idx = find(histo(:,4) > 0);
histo = histo(idx,:);
numberOfImages = max(dataSet(:,5));
%% Plot result
PointSizeMultiplier = 100000;
scatter3(histo(:,1).*binXInterval + xLimits(1), ...
         histo(:,2).*binYInterval + yLimits(1), ...
         histo(:,3).*binZInterval + zLimits(1), ...
         (histo(:,4)/numberOfImages)*PointSizeMultiplier, ...
         histo(:,4)/numberOfImages);
colormap hot;
% Size and color represent the average interaction intensity over time
(Figure: 4D histogram made from 10000 randomly generated particle interactions, each axis divided into 10 bins. Size and color represent the summed particle interactions in each bin over time.)
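As a side note (not part of the original answer), in base MATLAB (R2015a and later) the triple loop can be replaced by discretize plus accumarray; a minimal sketch, assuming dataSet and the limits as defined above:
nBins = 10;
xi = discretize(dataSet(:,1), linspace(xLimits(1), xLimits(2), nBins+1));
yi = discretize(dataSet(:,2), linspace(yLimits(1), yLimits(2), nBins+1));
zi = discretize(dataSet(:,3), linspace(zLimits(1), zLimits(2), nBins+1));
histo3 = accumarray([xi yi zi], dataSet(:,4), [nBins nBins nBins]); % summed interaction strength per 3D bin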
If your system can hold the matrix in MATLAB's memory, it could be as easy as:
A = mean(M, 4);
Assuming M holds the 4D compilation of your images then A would be your map.
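For instance (a hypothetical sketch; volumes and all sizes below are made-up stand-ins, not from the original answer):
% Hypothetical grid sizes and per-snapshot density volumes, for illustration only
nx = 100; ny = 100; nz = 100; nFrames = 10;
volumes = arrayfun(@(t) rand(nx, ny, nz), 1:nFrames, 'UniformOutput', false);
M = zeros(nx, ny, nz, nFrames);
for t = 1:nFrames
    M(:,:,:,t) = volumes{t}; % stack one 3D density volume per snapshot
end
A = mean(M, 4); % average over time; A is the density map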
One way would be to use a 3D scatter (bubble) plot, with variable circle/bubble sizes, proportional to the intensity of your particle.
Here is a simulated example:
N = 1e4; % number of particles
X = randn(N,1); % randomly generated coordinates
Y = 2*randn(N,1);
Z = 0.5*randn(N,1);
S = exp(-sqrt(X.^2 + Y.^2 + Z.^2)); % bubble size vector
scatter3(X,Y,Z,S*200)
Here I have randomly generated values for X, Y and Z, while S decays exponentially with the distance from the center of the cloud (see the expression for S above).
In your case, if we assume that the (X,Y,Z,w) values are stored in a 2D array called Particles, it would be:
X = Particles(:,1);
Y = Particles(:,2);
Z = Particles(:,3);
S = Particles(:,4);
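Then plot with the same call as in the simulated example (the factor of 200 is just an illustrative size scale, and the weights are assumed strictly positive, since scatter3 rejects zero sizes):
scatter3(X, Y, Z, S*200)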
Hope that helped.

Matlab Vectorization of Multivariate Gaussian Basis Functions

I have the following code for calculating the result of a linear combination of Gaussian functions. What I'd really like to do is to vectorize this somehow so that it's far more performant in Matlab.
Note that:
1. y is a column vector (the output);
2. x is a matrix where each column corresponds to a data point and each row to a dimension (i.e. 2 rows = 2D);
3. variance is a double;
4. gaussians is a matrix where each column is the mean point of one Gaussian;
5. weights is a row vector of the weights in front of each Gaussian. Its length is one greater than the number of columns of gaussians, as weights(1) is the 0th-order weight.
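In other words, for each column x_j of x the code below evaluates (this restatement is mine, derived from the code):

y_j = w_0 + \sum_{i=1}^{B} w_i \exp\left(-\frac{\lVert x_j - \mu_i \rVert^2}{2\sigma^2}\right)

where the \mu_i are the columns of gaussians, \sigma^2 is variance, and B is the number of basis functions.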
function [ y ] = CalcPrediction( gaussians, variance, weights, x )
    basisFunctions = size(gaussians, 2);
    xvalues = size(x, 2);
    if length(weights) ~= basisFunctions + 1
        ME = MException('TRAIN:CALC', 'The number of weights should be equal to the number of basis functions plus one');
        throw(ME);
    end
    y = weights(1) * ones(xvalues, 1);
    for xIdx = 1:xvalues
        for i = 1:basisFunctions
            diff = x(:, xIdx) - gaussians(:, i);
            y(xIdx) = y(xIdx) + weights(i+1) * exp(-(diff')*diff/(2*variance));
        end
    end
end
You can see that at the moment I simply iterate over the x vectors and then over the Gaussians inside two for loops. I'm hoping this can be improved. I've looked at meshgrid, but that seems to apply only to vectors (and I have matrices).
Thanks.
Try this:
diffx = bsxfun(@minus, x, permute(gaussians, [1,3,2])); % singleton expansion along the 3rd dimension
diffx2 = squeeze(sum(diffx.^2, 1)); % squared distances, shape is now [XVALUES, BASISFUNCTIONS]
weight_col = weights(:); % make sure weights is a column vector
y = weight_col(1) + exp(-diffx2/(2*variance))*weight_col(2:end); % a column vector of length XVALUES
Note, I changed diff to diffx since diff is a builtin, and added the constant weight_col(1) term so the result matches your loop. I'm not sure this will improve performance, as allocating the large intermediate arrays may offset the gains from vectorization.
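A quick way to check that the two agree (a hypothetical sanity check with made-up sizes):
D = 2; B = 50; N = 1000; % made-up problem sizes
x = randn(D, N); gaussians = randn(D, B);
weights = randn(1, B + 1); variance = 0.5;
y_loop = CalcPrediction(gaussians, variance, weights, x); % the original loop version
diffx = bsxfun(@minus, x, permute(gaussians, [1,3,2]));
diffx2 = squeeze(sum(diffx.^2, 1));
weight_col = weights(:);
y_vec = weight_col(1) + exp(-diffx2/(2*variance))*weight_col(2:end);
max(abs(y_loop - y_vec)) % should be on the order of machine epsilon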