Find a 3D neighbourhood in MATLAB

I have a volume (3D matrix) that has undergone a segmentation process. Most of the volume consists of NaNs (or zeros), except for regions that have passed some criteria (see picture). I need to know how large each remaining segment is in number of voxels and how it is distributed over the 2D planes (xy, xz, yz). Is there anything in MATLAB that can help me do this efficiently, rather than a direct search? The volume can be rather large. For example, in the attached picture there is one segment in a yellowish/brownish colour of 7 voxels that extends more vertically than in the xy plane.
Thanks in advance.

The most convenient solution is to use REGIONPROPS. In your example:
stats = regionprops(image, 'area', 'centroid')
For every feature, there is an entry in the structure stats with the area (i.e. # of voxels) and the centroid.
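As a minimal sketch (assuming your segmented volume is stored in a variable vol, with NaN or zero marking the background), you would first convert it to a logical mask, since regionprops expects a binary or label input rather than NaNs:
bw = ~isnan(vol) & vol ~= 0;                  % logical mask of the segmented voxels ("vol" is an assumption)
stats = regionprops(bw, 'Area', 'Centroid');  % one entry per connected segment
voxelCounts = [stats.Area];                   % number of voxels in each segment
centroids = cat(1, stats.Centroid);           % one [x y z] row per segment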

I think that what you are looking for is called bwlabeln. It allows you to find blobs in 3D space, just like bwlabel does in 2D. Afterwards, you can use regionprops to find out the properties of the data.
Taken directly from help:
bwlabeln Label connected components in binary image.
L = bwlabeln(BW) returns a label matrix, L, containing labels for the
connected components in BW. BW can have any dimension; L is the same
size as BW. The elements of L are integer values greater than or equal
to 0. The pixels labeled 0 are the background. The pixels labeled 1
make up one object, the pixels labeled 2 make up a second object, and
so on. The default connectivity is 8 for two dimensions, 26 for three
dimensions, and CONNDEF(NDIMS(BW),'maximal') for higher dimensions.
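As a hedged sketch of how bwlabeln and regionprops fit together here (reusing the logical mask bw from the sketch above), you can also project a labelled segment onto the three coordinate planes to see how it is distributed:
L = bwlabeln(bw);                        % label the 3D blobs, 26-connected by default
stats = regionprops(L, 'Area', 'Centroid');
k = 1;                                   % pick one segment to inspect
mask = (L == k);
xyFootprint = any(mask, 3);              % collapse z: extent in the xy plane
xzFootprint = squeeze(any(mask, 1));     % collapse rows (y): extent in the xz plane
yzFootprint = squeeze(any(mask, 2));     % collapse columns (x): extent in the yz plane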

Deleting objects based on area calculation

I have an image with small objects. I have calculated their areas using
area=regionprops(CC,'Area');
CC is the connected components returned by
CC=bwconncomp(BW);
Now I need to remove objects with area less than 15 (set them to zero in the original image BW).
I know I can do this in simpler ways than via regionprops, but I need to do it from the output of regionprops, because I will extract other properties that regionprops supports and filter the image again according to these extracted features in a similar way. Can anyone help me with this task?
After the two commands you show,
CC=bwconncomp(BW);
area=regionprops(CC,'Area');
area is a struct array where area(ii).Area is the area for object ii. This corresponds to the connected component given by CC.PixelIdxList{ii}.
You can find the indices with a small area by
I = find([area.Area] < 15);
Then,
CC.PixelIdxList{I}
gives a comma-separated list of vectors with pixel indices. You can join these vectors into a single vector using cat:
pixels = cat(1,CC.PixelIdxList{I});
Now all that is left is setting those pixels to 0 in the input image:
BW(pixels) = 0;
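Putting those steps together, a complete sketch of the filtering (with BW as your binary image) might look like this; you can request additional properties in the same regionprops call and combine the conditions inside the find:
CC = bwconncomp(BW);
stats = regionprops(CC, 'Area');              % request further properties here as needed
smallIdx = find([stats.Area] < 15);           % objects below the size threshold
BW(cat(1, CC.PixelIdxList{smallIdx})) = 0;    % zero out their pixels in BW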

matlab: limiting erosion on binary images

I am trying to erode objects in a binary image such that they do not become smaller than some fixed size. Consider, for instance, a binary map composed of connected components (blobs), wherein one defines blob size by either the minimal or maximal antipolar (anti-perimetric) distance (i.e., the distance between two points that are as far from one another as they can be on the perimeter or contour of the blob; if the contour consists of N consecutively numbered points, then the distances evaluated would be those between points 1 and N/2+1, points 2 and N/2+2, etc.). Given such an arrangement, I seek to erode these blobs until the distance metric reaches a specified limit. If the blobs were simple circles, then the effect could be realized by ultimate erosion followed by dilation to a fixed size; however, the contour of an irregular object would be lost by such a procedure. Is there a way to achieve such an effect for connected, irregular components using built-in functions in MATLAB?
Without an image or the code you have already tried I may be misunderstanding you, but iteratively applying bwmorph with 'thin', 'skel' or 'shrink' may help you:
cond = calc_cond(bw);              % your own size measure (e.g. the antipodal distance)
while cond > cond_threshold        % keep eroding while the blobs are still too large
    bw = bwmorph(bw, 'thin', 1);   % one iteration of 'thin' (or 'skel' / 'shrink')
    cond = calc_cond(bw);
end
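If it helps, here is a hedged sketch with a concrete stand-in metric: the MajorAxisLength reported by regionprops is used as a rough proxy for the antipodal distance you describe (swap in your own measure), and the loop stops once the smallest blob reaches the limit:
minSize = 20;                                  % do not let any blob get smaller than this
s = regionprops(bw, 'MajorAxisLength');
cond = min([s.MajorAxisLength]);               % size metric of the smallest blob
while ~isempty(cond) && cond > minSize
    bw = bwmorph(bw, 'shrink', 1);             % or 'thin' / 'erode'
    s = regionprops(bw, 'MajorAxisLength');
    cond = min([s.MajorAxisLength]);
end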

Which kind of filtering is used in SPCImage for binning?

I was wondering if anyone knows which kind of filter is applied by SPCImage from the Becker & Hickl system.
I am taking some FLIM data with my system and I want to create the lifetime images. To do so, I want to bin my images in the same way SPCImage does, so I can increase my SNR. The binning goes 1x1, 3x3, 5x5, etc. I have created a function for 3x3 binning, but each time it gets more complicated...
I want to do it in MATLAB, and maybe there is already a function that can help me with this.
Many thanks for your help.
This question is old, but for anyone else wondering: you want to sum the pixels in a (2M+1) x (2M+1) neighborhood in each plane (for integer M), so I am pretty sure you can go about the problem by treating it like a convolution.
% This is your original 3D SDT image.
% I assume that you have ordered the image with the spatial dimensions along
% the first and second dimensions and the time channels along the third.
img = ... % <- your 3D image goes here
% This describes your filter. M=1 means take a one-pixel rectangle around your
% centre pixel and add those values to the centre, etc. (i.e. M=1 equals a
% total of 3x3 pixels accumulated).
M = 2;
% This is the (2D) filter for your convolution.
filtr = ones(2*M + 1, 2*M + 1);
% The resulting binned image (3D): convn with a 2D kernel bins each time plane.
img_binned = convn(img, filtr, 'same');
You should definitely check the result against your calculation, but it should do the trick.
I think that you need to test/investigate image filter functions to apply to this kind of image (fluorescence-lifetime imaging microscopy).
A median filter, as shown here, is good for smoothing things. Alternatively, a weighted moving average filter applied to the image removes the bright spots so that only the broad features are maintained.
So you should review digital image processing in MATLAB.
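For the median-filter suggestion, a minimal sketch (assuming the same 3D image img as above, with the time channels along the third dimension) would apply medfilt2 plane by plane:
img_filtered = zeros(size(img), 'like', img);
for t = 1:size(img, 3)
    img_filtered(:,:,t) = medfilt2(img(:,:,t), [3 3]);   % 3x3 median per time channel
end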

Get the middle/center pixel of a segmentation

I have a segmented image. I wish to extract the middle pixel(s) of each segmentation. The goal is to extract the mean color from the middle pixel.
The following diagram illustrates what I mean by 'middle pixel':
The alternative middle pixels are also acceptable.
What algorithms/functions are available in Matlab to achieve something similar to this? Thanks.
If I'm understanding what you want correctly, you're looking for the centroid. MATLAB has the regionprops function, which measures the properties of separate binary objects, as long as the objects are distinct connected components.
You can use the Centroid property. Assuming your image is stored in im and is binary, something like this will do:
out = regionprops(im, 'Centroid');
The output will be a structure array of N elements where N corresponds to the total number of objects found in the image. To access the ith object's centroid, simply do:
cen = out(i).Centroid;
If you wish to collect all centroids and place them in, say, an N x 2 numeric array, something like this would work:
out = reshape([out.Centroid], 2, []).';
Each row would be the centroid of an object found in the image. Take note that an object is considered to be a blob of white pixels that are connected to each other.
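To get back to the original goal of reading a colour at each middle pixel, a hedged sketch (assuming the colour image is stored in rgb, with the same height and width as the binary mask im, and out is the N x 2 centroid array from just above):
centroids = round(out);                            % nearest pixel to each centroid, [x y] rows
colours = zeros(size(centroids, 1), 3);
for k = 1:size(centroids, 1)
    c = centroids(k, :);                           % [x y] of the k-th centroid
    colours(k, :) = squeeze(rgb(c(2), c(1), :));   % index as (row = y, column = x)
end
Note that for a non-convex blob the centroid can fall outside the object, in which case you may want to snap it to the nearest pixel of that object first.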

Dsift (vl_feat) and Matlab

I have a (probably naïve) question; I just wanted to clarify this part. When I run dsift on one image, I generally get a 128xn matrix. The thing is, that n value is not always the same across different images. Say image 1 gets a 128x10 matrix, while image 2 gets a 128x18 matrix. I am not quite sure why this is happening.
I think that each 128-dimensional column represents a single image feature or a single patch detected in the image. So in the case of 128x18, we have extracted 18 patches and described each of them with 128 values. If this is correct, why can't we have a fixed number of patches per image, say 20, so that our matrices would always be 128x20?
Cheers!
This is because the number of reliable features detected per image changes. Just because you detect 10 features in one image does not mean you will be able to detect the same number in another image. What matters is how closely a feature from one image matches one from the other.
What you can do (if you like) is extract the, say, 10 most reliable features that are matched the best between the two images, if you want to have something constant. Choose a number that is less than or equal to the minimum of the number of patches detected between the two. For example, supposing you detect 50 features in one image, and 35 features in another image. After, when you try and match the features together, this results in... say... 20 best matched points. You can choose the best 10, or 15, or even all of the points (20) and proceed from there.
I'm going to show you some example code to illustrate my point above, but bear in mind that I will be using vl_sift and not vl_dsift. The reason why is because I want to show you visual results with minimal pre- and post-processing. Should you choose to use vl_dsift, you'll need to do a bit of work before and after you compute the features by dsift if you want to visualize the same results. If you want to see the code to do that, you can check out the vl_dsift help page here: http://www.vlfeat.org/matlab/vl_dsift.html. Either way, the idea about choosing the most reliable features applies to both sift and dsift.
For example, supposing that Ia and Ib are uint8 grayscale images of the same object or scene. You can first detect features via SIFT, then match the keypoints.
[fa, da] = vl_sift(im2single(Ia));
[fb, db] = vl_sift(im2single(Ib));
[matches, scores] = vl_ubcmatch(da, db);
matches contains a 2 x N matrix, where the first row and second row of each column denotes which feature index in the first image (first row) matched best with the second image (second row).
Once you do this, sort the scores in ascending order. Lower scores mean better matches as the default matching method between two features is the Euclidean / L2 norm. As such:
numBestPoints = 10;
[~,indices] = sort(scores);
%// Get the numBestPoints best matched features
bestMatches = matches(:,indices(1:numBestPoints));
This should then return the 10 best matches between the two images. FWIW, your understanding about how the features are represented in vl_feat is spot on. These are stored in da and db. Each column represents a descriptor of a particular patch in the image, and it is a histogram of 128 entries, so there are 128 rows per feature.
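If what you are after is a fixed-size descriptor matrix per image (as in your 128x20 example), you could then keep only the descriptors of those best matches; a small sketch using the variables above:
da_best = da(:, bestMatches(1,:));   % 128 x numBestPoints descriptors from image 1
db_best = db(:, bestMatches(2,:));   % 128 x numBestPoints descriptors from image 2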
Now, as an added bonus, if you want to display how each feature from one image matches to another image, you can do the following:
%// Spawn a new figure and show the two images side by side
figure;
imagesc(cat(2, Ia, Ib));
%// Extract the (x,y) co-ordinates of each best matched feature
xa = fa(1,bestMatches(1,:));
%// CAUTION - Note that we offset the x co-ordinates of the
%// second image by the width of the first image, as the second
%// image is now beside the first image.
xb = fb(1,bestMatches(2,:)) + size(Ia,2);
ya = fa(2,bestMatches(1,:));
yb = fb(2,bestMatches(2,:));
%// Draw lines between each feature
hold on;
h = line([xa; xb], [ya; yb]);
set(h,'linewidth', 1, 'color', 'b');
%// Use VL_FEAT method to show the actual features
%// themselves on top of the lines
vl_plotframe(fa(:,bestMatches(1,:)));
fb2 = fb; %// Make a copy so we don't mutate the original
fb2(1,:) = fb2(1,:) + size(Ia,2); %// Remember to offset like we did before
vl_plotframe(fb2(:,bestMatches(2,:)));
axis image off; %// Take out the axes for better display