SimpleITK OrientedBoundingBoxVertices - in what coordinate system are the vertices defined?

Using the LabelShapeStatisticFilter, I can extract oriented regions of interest from the original image correctly. I want to plot those oriented bounding boxes over the original image.
When I try to view the output of the GetOrientedBoundingBoxVertices() method, it's not clear to me in what coordinate system these vertices are defined. They do not appear to be in the original image coordinate system.
I am confident I am using the LabelShapeStatisticFilter class as intended (see below), following this excellent notebook: http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/35_Segmentation_Shape_Analysis.html
import SimpleITK as sitk
from math import ceil

bacteria_labels = shape_stats.GetLabels()
bacteria_volumes = [shape_stats.GetPhysicalSize(label) for label in bacteria_labels]
num_images = 5  # number of bacteria images we want to display
bacteria_labels_volume_sorted = [label for _, label in sorted(zip(bacteria_volumes, bacteria_labels))]
resampler = sitk.ResampleImageFilter()
aligned_image_spacing = [10, 10, 10]  # in nanometers
for label in bacteria_labels_volume_sorted[0:num_images]:
    aligned_image_size = [int(ceil(shape_stats.GetOrientedBoundingBoxSize(label)[i] / aligned_image_spacing[i])) for i in range(3)]
    direction_mat = shape_stats.GetOrientedBoundingBoxDirection(label)
    aligned_image_direction = [direction_mat[0], direction_mat[3], direction_mat[6],
                               direction_mat[1], direction_mat[4], direction_mat[7],
                               direction_mat[2], direction_mat[5], direction_mat[8]]
    resampler.SetOutputDirection(aligned_image_direction)
    resampler.SetOutputOrigin(shape_stats.GetOrientedBoundingBoxOrigin(label))
    resampler.SetOutputSpacing(aligned_image_spacing)
    resampler.SetSize(aligned_image_size)
    obb_img = resampler.Execute(img)
    # Change the image axes order so that we have a nice display.
    obb_img = sitk.PermuteAxes(obb_img, [2, 1, 0])
    gui.MultiImageDisplay(image_list=[obb_img],
                          title_list=["OBB_{0}".format(label)])
I expect to be able to draw these bounding boxes over the original image, but I'm not sure how.
UPDATE
Perhaps this can illustrate what I mean better. Resampled Oriented Bounding Box, output as expected:
However, after using original_label_image.TransformPhysicalPointToContinuousIndex(), the oriented bounding box points in the original image space appear incorrect (shape_stats.GetOrientedBoundingBoxVertices() in original index space):
UPDATE 2
Using shape_stats.GetCentroid(), I can correctly get real-world coordinates of the centroids of each label and plot them correctly:
It also appears that the output of shape_stats.GetOrientedBoundingBoxOrigin() is plausibly in real-world coordinates. One element of shape_stats.GetOrientedBoundingBoxVertices() corresponds to shape_stats.GetOrientedBoundingBoxOrigin().

The vertices are defined in physical space, not index space. You may need to use the Image class's TransformPhysicalPointToIndex().

I believe I have figured it out: the oriented bounding box vertices are neither entirely in the original image's coordinate system nor entirely in the bounding box's own coordinate system.
The origin of the oriented bounding box, returned by shape_stats.GetOrientedBoundingBoxOrigin(), is in original-image world coordinates. This origin also corresponds to one vertex of the oriented bounding box.
Each vertex returned by shape_stats.GetOrientedBoundingBoxVertices() can then be recovered in real-world coordinates by a rotation about that origin using shape_stats.GetOrientedBoundingBoxDirection().
I don't know if this representation of the vertices was intentional, but it was confusing to me at first (though I am a relative newcomer to SimpleITK).
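A minimal sketch of that recovery in plain Python. The matrix, origin, and vertices here are placeholders for the values returned by shape_stats.GetOrientedBoundingBoxDirection/Origin/Vertices(); it assumes the direction matrix is stored row-major, and depending on convention you may need its transpose (as in the resampling code above).

```python
def rotate_about_origin(vertex, origin, direction):
    """Map an OBB vertex into world coordinates by rotating its offset
    from the OBB origin with the 3x3 direction matrix (row-major, flat)."""
    offset = [v - o for v, o in zip(vertex, origin)]
    rotated = [sum(direction[3 * r + c] * offset[c] for c in range(3))
               for r in range(3)]
    return [o + r for o, r in zip(origin, rotated)]

# With an identity direction matrix, the vertex is unchanged:
identity = [1, 0, 0, 0, 1, 0, 0, 0, 1]
world_vertex = rotate_about_origin([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], identity)
print(world_vertex)  # → [1.0, 2.0, 3.0]
```

Looping this over all eight vertices gives points you can convert with TransformPhysicalPointToContinuousIndex() for overlay on the original image.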


How do I Convert an OpenLayers Polygon to a Circle?

I have a drawing feature where, in one case, a person can draw a circle using the methodology in the OL docs example. When that's saved, the server needs it converted to a polygon, and I was able to do that using fromCircle.
Now I need to make the circle modifiable after it's been converted and saved, but I don't see a clear-cut way to get a Circle geometry out of the Polygon tools provided in the library. There is a Polygon.circular, but that doesn't sound like what I want.
I'm guessing the only way to do this is to grab the center, and one of the segment points, and figure out the radius manually?
As long as fromCircle used sides set to a multiple of 4 and rotation zero (which are the defaults), the center and radius can easily be obtained to convert back to a circle:
center = ol.extent.getCenter(polygon.getExtent());
radius = ol.extent.getWidth(polygon.getExtent())/2;
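For clarity, the same arithmetic sketched in plain Python, independent of OpenLayers (the helper name and inputs are hypothetical): with sides a multiple of 4 and no rotation, the polygon's bounding box equals the circle's bounding box, so the center is the extent's midpoint and the radius is half its width.

```python
import math

def circle_from_regular_polygon(points):
    """Recover (center, radius) from a polygon produced by fromCircle.
    Assumes sides % 4 == 0 and rotation 0, so the polygon's extent
    coincides with the circle's bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    center = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    radius = (max(xs) - min(xs)) / 2
    return center, radius

# 8-sided polygon approximating a circle centered at (2, 3), radius 5:
pts = [(2 + 5 * math.cos(a), 3 + 5 * math.sin(a))
       for a in (i * math.pi / 4 for i in range(8))]
center, radius = circle_from_regular_polygon(pts)
```

If the assumptions don't hold (custom sides or a rotation), you would instead take the center and the distance to any vertex.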

Detect the position, orientation and color in MATLAB of non-overlapped tiles to be picked by a robot

I am currently working on a project where I need to find the square tiles in a pile that are not overlapped, and to determine the orientation, position (center), and color of each tile. These orientations and positions will be used as input for a robot, which will pick the tiles and sort them into specific locations. I am using MATLAB, and I should transfer the data using TCP/IP.
I've experimented with edge detection (Canny, Sobel), found the boundaries, and tried segmentation using thresholding and FCM, but I haven't found a reliable way to determine which tiles are not overlapped. I am trying to use template shape matching, but I don't know how to do that. This needs to be done in real time, as I will be using frames taken from a USB camera attached to a PC.
Could someone offer a reliable solution to determine the square tiles that are not overlapped? Here is a sample image.
You've separated the image into tiles and background, so now simply label all the connected components. Take each one and test for single-tile-ness. If you know the approximate size of the tiles, first exclude by area. Then calculate the centroid and the extreme left, right, top and bottom points. If it is a single tile, the intersection of top-bottom and left-right will be approximately at the centroid, and the half angles will be perpendicular to the tile edges. So rotate, take the bounding box, and count unset pixels, which should be almost zero for a rectangular tile.
(You'll probably need to do a morphological operation or two to clean up the images if the tile / background separation is a bit dicey).
Check out the binary image processing library: http://malcolmmclean.github.io/binaryimagelibrary
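The label-and-filter steps described above can be sketched in plain Python (a real pipeline would use bwconncomp/regionprops in MATLAB or OpenCV's connectedComponents; the binary grid below is a hypothetical stand-in for the thresholded image):

```python
from collections import deque

def label_blobs(grid):
    """4-connected component labeling by flood fill.
    Returns a list of blobs, each a list of (row, col) pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

def likely_single_tiles(blobs, expected_area, tolerance=0.3):
    """First filter: keep blobs whose pixel area is within
    +/- tolerance of one tile's expected area."""
    lo, hi = expected_area * (1 - tolerance), expected_area * (1 + tolerance)
    return [b for b in blobs if lo <= len(b) <= hi]

# Toy binary image: one 2x2 blob and one isolated pixel.
blobs = label_blobs([[1, 1, 0, 0],
                     [1, 1, 0, 1],
                     [0, 0, 0, 0]])
singles = likely_single_tiles(blobs, expected_area=4)
```

The surviving blobs would then go through the centroid/extreme-point test described in the answer.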
Thanks for your quick reply. I already did some morphological operations and found the connected components; below is my MATLAB code. Each tile has a 2.5 x 2.5 cm area.
a = imread('origenal image.jpg');
I = rgb2gray(a);
imshow(I)
threshold = graythresh(I);
se1 = strel('diamond',2);
I1 = imerode(I,se1);
figure(1)
imshow(I1);
bw = imclose(I1, ones(25));
imshow(bw)
CC = bwconncomp(bw);
L = labelmatrix(CC);

Classify static/moving objects on a set of images in MATLAB

I have to implement a basic tracking program in MATLAB that, given a set of frames from a videogame, analyzes each one of them and then creates a bounding box around each object. I've used the function regionprops to obtain the coordinates of the bounding boxes for each object, and visualized them using the function rectangle, as follows:
for i = 1:size(frames,2)
    CC{1,i} = findConnectedComponents(frames{1,i});
    stats{1,i} = regionprops('struct',CC{1,i},'BoundingBox','Centroid');
    imshow(frames{1,i}), hold on
    for j = 1:size(stats{1,i},1)
        r = rectangle('Position',stats{1,i}(j).BoundingBox);
        r.FaceColor = [0 0.5 0.5 0.45];
    end
end
This works just fine, but I'd like to go one step further and be able to differentiate static objects from moving objects. I thought of using the centroid to see, for each object, whether it is in a different place in each frame (which would mean the object is moving), but each image has a different number of objects.
For example, if I am trying this on Space Invaders, when you kill an alien it disappears, so the number of objects is reduced. Also each projectile is a separate object and there could be a different number of projectiles in different moments of the game.
So my question is: how could I classify the objects based on whether they move or not, and paint them in two different colors?
If the background is consistent, using optical flow is ideal for you.
The basic idea is pretty simple: subtract two consecutive frames, and use the result to get the flow vectors of the objects that moved between them.
You can look at the Lucas–Kanade method and the Horn–Schunck method; MATLAB implementations of both are available online.
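The simpler centroid idea from the question can also cope with a varying object count if each centroid in the current frame is matched to its nearest neighbour in the previous frame. A minimal sketch in plain Python (the helper and threshold are hypothetical, with centroid lists standing in for regionprops output):

```python
import math

def classify_objects(prev_centroids, curr_centroids, move_threshold=2.0):
    """Label each current centroid 'static' if its nearest previous
    centroid is within move_threshold, else 'moving' (newly appeared
    objects with no previous match also count as moving)."""
    labels = []
    for cx, cy in curr_centroids:
        if not prev_centroids:
            labels.append("moving")
            continue
        dist = min(math.hypot(cx - px, cy - py) for px, py in prev_centroids)
        labels.append("static" if dist <= move_threshold else "moving")
    return labels

# One object barely moved, the other jumped 10 pixels:
labels = classify_objects([(0, 0), (10, 10)], [(0.5, 0), (20, 10)])
print(labels)  # → ['static', 'moving']
```

The two labels then map directly onto two rectangle colors in the plotting loop.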

Region of Interest in nighttime vehicle detection

I am developing a project to detect vehicles' headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computing requirements low. I have researched many papers, and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible ROI, one that changes in each frame. My experiment images are shown here:
If anyone has any ideas, please give me some suggestions.
I would turn the problem around: rather than saying that the headlights are below a certain line (i.e. the horizon), say that we are looking for headlights ABOVE a certain line.
Your images have a very strong reflection off the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the maximum light and use that as our floor, then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. at night) where the reflection of the headlights onto the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV, but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys.
We are trying to smooth the headlights and the reflection so that there is one nice peak; ideally, the headlights and the reflection merge into one area.
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
intensity_profile.append(pl.sum(grey_image[r,:]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left show you the distribution. The right graph shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y-coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile == max_value)[0][0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile.
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.
I display the result below.
I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.

Identify lobes and bumps of leaves

I need some help with a project about leaves that I want to do in MATLAB.
My input is an image of one leaf (with a white background), and I need to know two things about the leaf:
1) Find the lobed leaf (the pixels of each lobe):
Lay the leaf on a table or workspace where you can examine it. Look at the leaf you are trying to identify: if the leaf looks like it has fingers, these are considered lobes. There can be anywhere from two to many lobes on a leaf.
Distinguish pinnate leaves from palmate leaves by looking at the veins on the underside of the leaf. If the veins all come from the same place at the base of the leaf, it is considered palmately lobed. If they are formed at various places on the leaf from one centre line, the leaf is pinnately lobed.
Identify the type of leaf by using a leaf dictionary.
2) Find approximately the number of bumps on the leaf; in other words, find the "swollen points" of each leaf.
These are examples of leaves:
I've found some leaf examples here.
Here is my attempt to solve the problem.
In the images that I've found, the background is completely black. If it is not so in your images, you should use Otsu's thresholding method.
I assumed that there can be only 3 types of leaves, according to your image:
The idea is to do blob analysis. I use the morphological operation of opening to separate the leaflets. If there is only one blob after the opening, I assume the leaf is not compound. If the leaf is not compound, I analyze the solidity of its blob; not solid enough means it is lobed.
Here are some examples:
function IdentifyLeaf(dirName,fileName)
    figure();
    im = imread(fullfile(dirName,fileName));
    subplot(1,3,1); imshow(im);
    % thresh = graythresh( im(:,:,2));
    imBw = im(:,:,2) > 0;
    subplot(1,3,2); imshow(imBw);
    radiusOfStrel = round( size(im,1)/20 );
    imBwOpened = imopen(imBw,strel('disk',radiusOfStrel));
    subplot(1,3,3); imshow(imBwOpened);
    rpOpened = regionprops(imBwOpened,'Area');
    if numel(rpOpened) > 1
        title('Pinnately Compound');
    else
        rp = regionprops(imBw,'Area','Solidity');
        % Keep only the largest blob
        area = [rp.Area];
        [~,maxIndex] = max(area);
        rp = rp(maxIndex);
        if rp.Solidity < 0.9
            title('Pinnately Lobed');
        else
            title('Pinnately Veined');
        end
    end
end
I would approach this problem by converting it from 2D to 1D: scan the perimeter of the leaf into a vector using the "right hand on the wall" algorithm.
From that data, I presume, one can find a dominant axis of symmetry (e.g. by fitting a line); the distance of the perimeter from that axis would be calculated, and then one could simply use thresholding and filtering to find local maxima and minima, revealing the number of lobes/fingers. The histogram of distances could differentiate between pinnately lobed and pinnately compound leaves.
Another single metric to check the curvature of the perimeter (between two extreme points) is sinuosity: http://en.wikipedia.org/wiki/Sinuosity
Recognizing veins is unfortunately a completely different topic.
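The sinuosity metric linked above is simply the traced path length divided by the straight-line distance between its endpoints. A minimal sketch in plain Python (the perimeter points here are a hypothetical example, not from a real leaf):

```python
import math

def sinuosity(points):
    """Ratio of the traced path length to the straight-line distance
    between its endpoints (1.0 for a perfectly straight path)."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return path / chord

# A right-angle detour from (0,0) to (1,1): path length 2, chord sqrt(2).
s = sinuosity([(0, 0), (1, 0), (1, 1)])
print(s)  # ≈ 1.414
```

Applied to a perimeter segment between two extreme points, a sinuosity well above 1 indicates a strongly curved (lobed) boundary.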