My dataset is composed of 40 HDR images, each of size 128x128x18. I have loaded my data, but now I want to erase the black voxels inside the images. In particular, I set a threshold of 250 (I looked at the grayscale values, so I know it is appropriate), but I am having some problems doing that.
My idea is to write:
threshold = 250;
data(data < threshold) = 0;
% Get linear indices of non-zero voxels
nonZeroIndices = find(data);
% Convert linear indices to (x, y, z) coordinates
[x, y, z] = ind2sub(size(data), nonZeroIndices);
vector=[x,y,z];
But this way I obtain a matrix with 3 columns and a large number of rows. Should I instead expect a matrix of reduced dimensions, like 56x56x18?
The function to perform an N-dimensional convolution of arrays A and B in matlab is shown below:
C = convn(A,B) % returns the N-dimensional convolution of arrays A and B.
I am interested in a 3-D convolution with a Gaussian filter.
If A is a 3 x 5 x 6 matrix, what do the dimensions of B have to be?
The dimensions of B can be anything you want. There is no set restriction in terms of size. For the Gaussian filter, it can be 1D, 2D or 3D. In 1D, what will happen is that each row gets filtered independently. In 2D, what will happen is that each slice gets filtered independently. Finally, in 3D you will be doing what is expected in 3D convolution. I am assuming you would like a full 3D convolution, not just 1D or 2D.
You may also be interested in the output size of convn. If you refer to the documentation: given two N-dimensional arrays, then for each dimension k, if size(A,k) is the size of dimension k of A and size(B,k) is the size of dimension k of B, the size of dimension k of the output C is:
size(C,k) = max([size(A,k) + size(B,k) - 1, size(A,k), size(B,k)])
The term size(A,k) + size(B,k) - 1 comes straight from convolution theory: the output size in a dimension is the sum of the two sizes in that dimension minus 1. However, should this value ever be smaller than size(A,k) or size(B,k), the max ensures the output is large enough to hold either input, which is why the final output size is bounded below by both A and B.
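As a quick sanity check of that rule, you can verify the output dimensions directly (the sizes below are just examples):
% Worked example of the output-size rule: each dimension is size(A,k) + size(B,k) - 1
A = rand(3, 5, 6);
B = rand(4, 4, 4);
size(convn(A, B))    % [3+4-1, 5+4-1, 6+4-1] = 6 8 9

B2 = rand(1, 3);     % a kernel with fewer dimensions also works; missing sizes count as 1
size(convn(A, B2))   % [3, 5+3-1, 6] = 3 7 6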
To make this easier, you can set the size of the filter guided by the standard deviation of the distribution. I would like to refer you to my previous Stack Overflow post: By which measures should I set the size of my Gaussian filter in MATLAB?
This determines what the output size of a Gaussian filter should be given a standard deviation.
In 2D, the dimensions of the filter are N x N, such that N = ceil(6*sigma + 1) with sigma being the desired standard deviation. Therefore, you would allocate a 3D matrix of size N x N x N with N = ceil(6*sigma + 1);.
Therefore, the code you would want to use to create a 3D Gaussian filter would be something like this:
% Example input
A = rand(3, 5, 6);
sigma = 0.5; % Example
% Find size of Gaussian filter
N = ceil(6*sigma + 1);
% Define a symmetric grid of centered coordinates (odd-sized so the filter has a true centre)
[X, Y, Z] = meshgrid(-floor(N/2) : floor(N/2));
% Compute Gaussian filter - note normalization step
B = exp(-(X.^2 + Y.^2 + Z.^2) / (2.0*sigma^2));
B = B / sum(B(:));
% Convolve
C = convn(A, B);
One final note: if any dimension of the filter you provide is larger than the corresponding dimension of the input matrix A, the output size still follows the rule above for each dimension, but the border elements will be affected by the implicit zero-padding.
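As a side note, if you would rather have the output the same size as A (as is common when filtering), convn also accepts a shape argument:
% Keep the filtered result the same size as the input volume
C = convn(A, B, 'same');   % output has the same size as A (central part of the full convolution)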
I have an n-channel image and a 100x2 matrix of points (in my case n is 20, but perhaps it is clearer to think of this as a 3-channel image). I need to sample the image at each point and get an n x 100 array of these image samples.
I know how to do this with a for loop:
for j = 1:100
samples(j,:) = image(points(j,1),points(j,2),:);
end
How would I vectorize this? I have tried
samples = image(points);
but this gives 200 samples of 20 channels. And if I try
samples = image(points,:);
this gives me 200 samples of 4800 channels. Even
samples = image(points(:,1),points(:,2));
gives me 100 x 100 samples of 20 channels (one for each possible combination of x in X and y in Y).
A concise way to do this is to reshape your image from [nRows, nCols, nChannels] to [nRows*nCols, nChannels]. Then convert your points array into linear indices (using sub2ind), which correspond to the row indices of this "combined" first dimension. To grab all channels, simply use the colon operator (:) for the second dimension, which now represents the channels.
% Determine the new row index that will correspond to each point after we reshape
% (points are assumed to be stored as [x, y]; swap the two sub2ind arguments below
% if your points are [row, col] as in the loop in the question)
sz = size(image);
inds = sub2ind(sz([1, 2]), points(:,2), points(:,1));
% Do the reshaping (i.e. flatten the first two dimensions)
reshaped_image = reshape(image, [], size(image, 3));
% Grab the pixels (rows) that we care about for all channels
newimage = reshaped_image(inds, :);
size(newimage)
100 20
Now you have the image sampled at the points you wanted for all channels.
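If you want to convince yourself that this matches the loop in your question, here is a small self-contained check (the sizes and the names img and points are made up for the example, and the points are assumed to be stored as [x, y] pairs):
img = rand(480, 640, 20);                            % hypothetical 20-channel image
points = [randi(640, 100, 1), randi(480, 100, 1)];   % 100 random [x, y] points

% Loop version
samplesLoop = zeros(100, size(img, 3));
for j = 1:100
    samplesLoop(j, :) = reshape(img(points(j, 2), points(j, 1), :), 1, []);
end

% Vectorized version
inds = sub2ind([size(img, 1), size(img, 2)], points(:, 2), points(:, 1));
reshaped_img = reshape(img, [], size(img, 3));
samplesVec = reshaped_img(inds, :);

isequal(samplesLoop, samplesVec)   % returns logical 1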
I need to extract image patches of size s x s x 3 around specified 2D locations from an image (3 channels).
How can I do this efficiently without a for loop? I know I can extract one patch around (x,y) location as:
apatch = I(y-s/2:y+s/2, x-s/2:x+s/2, :)
How can I do this for many patches? I know I can use MATLAB's function blockproc but I can't specify the locations.
You can use im2col from the Image Processing Toolbox to transform each pixel neighbourhood into a single column. The pixel neighbourhoods are selected on a column basis, which means that the blocks are constructed by traversing down the rows first, then proceeding to the next column and grabbing the neighbourhoods there.
You call im2col this way:
B = im2col(A, [M N]);
I'm assuming you'll want sliding / overlapping neighbourhoods rather than distinct ones, since sliding blocks are what is normally used when performing any kind of image filtering. A is your image and you want to extract M x N pixel neighbourhoods transformed into columns. B would be the output, where each neighbourhood is a single column and the columns are tiled together horizontally. However, you'll probably want to handle the case where you grab pixel neighbourhoods along the borders of the image, so you'll want to pad the image first. We'll assume that M and N are odd to make the padding easier. Specifically, you want floor(M/2) rows padded on the top and bottom of the image, and floor(N/2) columns padded on the left and right. As such, we should pad A first using padarray. Let's assume the border pixels will be replicated, meaning the padded rows and columns are simply copies of the top or bottom row, or the left or right column, depending on where we need to pad. Therefore:
Apad = padarray(A, floor([M N]/2), 'replicate');
For the next part, if you want to select specific neighbourhoods, you can use sub2ind to convert your 2D co-ordinates into linear indices so you can select the right columns to get the right pixel blocks. However, because you have a colour image, you'll want to perform im2col on each colour channel. Unfortunately, im2col only works on grayscale images, so you'd have to repeat this for each channel in your image.
As such, to get ready for patch sampling, do something like this:
B = arrayfun(@(x) im2col(Apad(:,:,x), [M N]), 1:size(A,3), 'uni', 0);
B = cat(3, B{:});
The above code will create a 3D version of im2col, where each 3D slice would be what im2col produces for each colour channel. Now, we can use sub2ind to convert your (x,y) co-ordinates into linear indices so that we can choose which pixel neighbourhoods we want. Therefore, assuming your positions are stored in vectors x and y, you would do something like this:
%// Generate linear indices
ind = sub2ind([size(A,1) size(A,2)], y, x);
%// Select neighbourhoods
%// Should be shaped as an M*N x numel(ind) x 3 matrix
neigh = B(:,ind,:);
%// Create cell arrays for each patch
patches = arrayfun(@(x) reshape(neigh(:,x,:), [M N 3]), 1:numel(ind), 'uni', 0);
patches will be a cell array where each element contains your desired patch at each location of (x,y) that you specify. Therefore, patches{1} would be the patch located at (x(1), y(1)), patches{2} would be the patch located at (x(2), y(2)), etc. For your copying and pasting pleasure, this is what we have:
%// Define image, M and N here
%//...
%//...
Apad = padarray(A, floor([M N]/2), 'replicate');
B = arrayfun(@(x) im2col(Apad(:,:,x), [M N]), 1:size(A,3), 'uni', 0);
B = cat(3, B{:});
ind = sub2ind([size(A,1) size(A,2)], y, x);
neigh = B(:,ind,:);
patches = arrayfun(@(x) reshape(neigh(:,x,:), [M N 3]), 1:numel(ind), 'uni', 0);
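In case it helps, a hypothetical setup for the placeholders at the top of that block could look like this (the image name and coordinates are just examples):
A = imread('peppers.png');   % any RGB image shipped with MATLAB
M = 5;  N = 5;               % odd patch dimensions
x = [10 40 65];              % column (x) coordinates of the patch centres
y = [20 35 80];              % row (y) coordinates of the patch centres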
As unexpected as this may seem, for me the naive for-loop is actually the fastest. This might depend on your version of MATLAB though, as newer versions keep improving the JIT compiler.
Common data:
A = rand(30, 30, 3); % Image
I = [5,2,3,21,24]; % I = y
J = [3,7,5,20,22]; % J = x
s = 3; % Block size
Naive approach: (faster than im2col and arrayfun!)
Patches = cell(size(I));
steps = -(s-1)/2:(s-1)/2;
for k = 1:numel(Patches)
    Patches{k} = A(I(k)+steps, J(k)+steps, :);
end
Approach using arrayfun: (slower than the loop)
steps = -(s-1)/2:(s-1)/2;
Patches = arrayfun(@(ii,jj) A(ii+steps,jj+steps,:), I, J, 'UniformOutput', false);
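If you want to reproduce the comparison yourself, a rough tic/toc sketch like the following will do (variable names are arbitrary, and the exact numbers will depend on your MATLAB version and machine, as noted above):
steps = -(s-1)/2:(s-1)/2;

tic;
PatchesLoop = cell(size(I));
for k = 1:numel(PatchesLoop)
    PatchesLoop{k} = A(I(k)+steps, J(k)+steps, :);
end
tLoop = toc;

tic;
PatchesFun = arrayfun(@(ii,jj) A(ii+steps, jj+steps, :), I, J, 'UniformOutput', false);
tFun = toc;

fprintf('loop: %.2g s, arrayfun: %.2g s\n', tLoop, tFun);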
I am implementing a simple algorithm to do in-painting on a "damaged" image. I have a predefined mask that specifies the area which needs to be fixed. My strategy is to start at the border of the masked area and in-paint each pixel with the mean of its non-zero neighbouring pixels, repeating until there are no unknown pixels left.
function R = inPainting(I, mask)
H = [1 2 1; 2 0 2; 1 2 1];
R = I;
n = 1;
[row,col,~] = find(~mask); %Find zeros in mask (area to be inpainted)
unknown = horzcat(row, col)';
while size(unknown,2) > 0
    new_unknown = [];
    new_R = R;
    for u = unknown
        r = u(1);
        c = u(2);
        nb = R(max((r-n), 1):min((r+n), end), max((c-n),1):min((c+n),end));
        nz = nb~=0;
        nzs = sum(nz(:));
        if nzs ~= 0 %We have non-zero neighbouring pixels. In-paint with average.
            new_R(r,c) = sum(nb(:)) / nzs;
        else
            new_unknown = horzcat(new_unknown, u);
        end
    end
    unknown = new_unknown;
    R = new_R;
end
This works well, but it's not very efficient. Is it possible to vectorize such an approach, using mostly matrix operations? Does someone know of a more efficient way to implement this algorithm?
If I understand your problem statement, you are given a mask and you wish to fill in the pixels covered by this mask with the mean of the neighbourhood pixels that surround each masked pixel. Another constraint is that in the image itself, any pixels at the spatial locations belonging to the mask are zero. You start from the border of the mask and propagate information towards the inside of the mask. Given this algorithm, there is unfortunately no way to do it with standard filtering techniques, because the current time step depends on the previous time step.
Image filtering mechanisms, like imfilter or conv2, can't work here because of this dependency.
As such, what I can do is help you speed up what is going on inside your loop, and hopefully this will give you some speed-up overall. I'm going to introduce you to a function called im2col. It comes from the Image Processing Toolbox, and given that you are able to use imfilter, you should have access to it as well.
im2col creates a 2D matrix such that each column is a pixel neighbourhood unrolled into a single vector. It grabs the pixel neighbourhoods in column-major order: we take the neighbourhood at the top-left corner of the image, then move down one row, and another, until we reach the last row; we then move one column over and repeat. Each pixel neighbourhood gets unrolled into a single vector, and the output is an M*N x K matrix, where M x N is the neighbourhood size and K is the number of neighbourhoods.
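To make the shape of the output concrete, here is a tiny example on a 4 x 4 matrix with 3 x 3 sliding neighbourhoods:
T = magic(4);
cols = im2col(T, [3 3], 'sliding');
size(cols)   % 9 x 4: four 3x3 neighbourhoods, each unrolled into a 9-element column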
Therefore, at each iteration of your loop, we can unroll the current inpainted image's pixel neighbourhoods into single columns, select the neighbourhoods of the unknown pixels, and count how many non-zero values each of these neighbourhoods contains. After that, we compute the mean of each such neighbourhood, disregarding the zero elements. Once we're done, we update the image and move to the next iteration.
What we're going to need to do first is pad the image with a 1 pixel border so that we're able to grab neighbourhoods that extend beyond the borders of the image. You can use padarray, also from the image processing toolbox.
Therefore, we can simply do this:
function R = inPainting(I, mask)
R = double(I); %// For precision
n = 1;
%// Change - column major indices
unknown = find(~mask); %Find zeros in mask (area to be inpainted)
%// Until we have searched all unknown pixels
while numel(unknown) ~= 0
new_R = R;
%// Change - take image at current iteration and
%// create columns of pixel neighbourhoods
padR = padarray(new_R, [n n], 'replicate');
cols = im2col(padR, [2*n+1 2*n+1], 'sliding');
%// Change - Access the right pixel neighbourhoods
%// denoted by unknown
nb = cols(:,unknown);
%// Get total sum of each neighbourhood
nbSum = sum(nb, 1);
%// Get total number of non-zero elements per pixel neighbourhood
nzs = sum(nb ~= 0, 1);
%// Replace the right pixels in the image with the mean
new_R(unknown(nzs ~= 0)) = nbSum(nzs ~= 0) ./ nzs(nzs ~= 0);
%// Find new unknown pixels to look at
unknown = unknown(nzs == 0);
%// Update image for next iteration
R = new_R;
end
%// Cast back to the right type
R = cast(R, class(I));
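For reference, a minimal usage sketch, assuming a grayscale image and a logical mask whose false entries mark the damaged region (the image name and region below are just examples):
I = im2double(imread('cameraman.tif'));   % any grayscale image
mask = true(size(I));
mask(100:120, 100:140) = false;           % zeros mark the pixels to inpaint

damaged = I;
damaged(~mask) = 0;                       % the algorithm expects unknown pixels to be zero

R = inPainting(damaged, mask);
imshowpair(damaged, R, 'montage');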
I have a grey image divided into patches, and I need to select pixels from each patch randomly and uniformly, so the method of selection should be the same for all patches.
The uniform pixel selection is critical in my project, as I will need to find the intensity difference between each pair of pixels.
Here is the code I tried, but it does not give the required result, as only 8 pixels are selected while the patch size is 90 x 100:
I = imread('0001hv1.bmp');
Rpix = zeros(size(I));
[m n] = size(I);
for i = 2:m-1
    for j = 2:n-1
        switch randi(8,1,1)
            case 1
                rpix1 = I(i-1,j-1);
            case 2
                rpix2 = I(i-1,j);
            case 3
                rpix3 = I(i-1,j+1);
            case 4
                rpix4 = I(i,j-1);
            case 5
                rpix5 = I(i,j+1); %skip i,j as that is the pixel itself
            case 6
                rpix6 = I(i+1,j-1);
            case 7
                rpix7 = I(i+1,j);
            case 8
                rpix8 = I(i+1,j+1);
        end
        %rpix(i,j) = rpix ;
    end
end
im_sub1 = rpix1 - rpix2;
im_sub2 = rpix3 - rpix4;
im_sub3 = rpix5 - rpix6;
im_sub4 = rpix7- rpix8;
I read about Gaussian distribution where the idea proposed is:" X and Y are randomly sampled using a Gaussian distribution where first X is sampled with a standard deviation of 0.04*S^2 and then the Yi’s are sampled using a Gaussian distribution – Each Yi is sampled with mean Xi and standard deviation of 0.01 * S^2."
Is this suitable for my case, and how can I implement it?
Thank you.
What I would do is fill a 1x8 vector with random values and then use these as indices. The indices then need to be mapped to the real (x, y) offsets. Be careful to use a one-dimensional index first: because you have a hole at the centre, uniformity would be a problem if you drew random x and y values directly.
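A minimal sketch of that idea, reusing the variable names from your code (treat it as an assumption about what you want, not a definitive implementation): draw one random index per pixel and map it to one of the 8 neighbour offsets.
offsets = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];   % the 8 neighbours, centre excluded

I = imread('0001hv1.bmp');
[m, n] = size(I);
Rpix = zeros(m, n, 'like', I);

for i = 2:m-1
    for j = 2:n-1
        k = randi(8);   % one-dimensional index, uniform over the 8 neighbours
        Rpix(i, j) = I(i + offsets(k, 1), j + offsets(k, 2));
    end
end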