How to get regions of interest in images - MATLAB

I need to train an R-CNN on my dataset. The image above is an example in which the first column contains the path to the image and the second column contains the coordinates of the bounding box (ROI). How do I get those coordinates in MATLAB? My dataset is large, so how can those coordinates be extracted without marking each image manually?
For example, if I am training an R-CNN for stop signs, the second column should contain the coordinates of the bounding box containing the stop sign within the whole image.

I do not know which version of MATLAB you are running, but I'm assuming it is fairly new (R2017a or later). Also, by 'how to get the coordinates', I assume you mean 'how to determine' or 'how to assign' the coordinates.
I believe what you need to do is use the image labeling app imageLabeler to annotate rectangles in your training images. You either do this manually, if that's amenable, or you use an automation algorithm if you already have a detector that does something similar. See this page for more details:
https://www.mathworks.com/help/vision/ug/create-and-import-an-automation-algorithm-for-ground-truth-labeling.html
Once you have the results of labeling stored in a groundTruth object, you would need to use something like objectDetectorTrainingData to create the table you are looking for.
See https://www.mathworks.com/help/vision/ug/train-an-object-detector-from-ground-truth-data.html for more details.
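As a minimal sketch (the MAT-file and variable names here are placeholders, assuming you exported your labels from imageLabeler as a groundTruth object):

% Minimal sketch: convert exported imageLabeler results into the
% training table. 'stopSignLabels.mat' and gTruth are placeholder names.
data = load('stopSignLabels.mat');                 % contains gTruth
trainingData = objectDetectorTrainingData(data.gTruth);
% trainingData is a table: column 1 holds the image file names, and
% each remaining column holds [x y width height] boxes for one label.

With a single label class, that table is exactly the two-column format described in the question.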

Related

Interpolation between two images with different pixel sizes

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
% Resample the CT volume onto a grid with the PET image's dimensions
[X, Y, Z] = ndgrid(linspace(1, size(imagedata_ct,1), size_pet(1)), ...
                   linspace(1, size(imagedata_ct,2), size_pet(2)), ...
                   linspace(1, size(imagedata_ct,3), size_pet(3)));
new_imageData_CT = interp3(imagedata_ct, X, Y, Z, 'nearest', -1024);
The size of my new image new_imageData_CT now matches the PET image. The problem is that the data in my new image is not scaled correctly, so it looks compressed. I think the reason is that the voxel sizes of the two images are different and are not taken into account in the interpolation. For example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I take the voxel sizes into account in the interpolation?
You need to perform the matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You also need to synchronize the positions (and maybe the orientations, but differing orientations are unlikely) of the two volumes.
You may find this thread helpful for finding out which DICOM tags describe the volume and how to calculate the transformation matrices used to transform between patient coordinates (x, y, z in millimeters) and volume coordinates (column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply assume a common origin. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but this is a science of its own. See the link from Hugues Fontenelle.
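To make the tag arithmetic concrete, here is a rough MATLAB sketch (the file name is a placeholder, and single-frame slices are assumed) that maps a pixel of one slice into patient coordinates:

% Rough sketch: map pixel (row r, column c, zero-based) of one DICOM
% slice into patient coordinates in millimeters.
info = dicominfo('ct_slice_001.dcm');                % placeholder file
ipp = info.ImagePositionPatient;                     % (0020,0032), mm
iop = reshape(info.ImageOrientationPatient, 3, 2);   % (0020,0037)
ps  = info.PixelSpacing;                             % (0028,0030), [row col] mm
r = 10; c = 20;                                      % example pixel indices
pos = ipp + iop(:,1) * ps(2) * c ...                 % step along a row
          + iop(:,2) * ps(1) * r;                    % step down a column
% Only compare 'pos' values between CT and PET when both slices share
% the same Frame Of Reference UID (info.FrameOfReferenceUID).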
BTW: in your example, you are not going to find a matching voxel in both volumes for every position, as the volumes cover different physical extents. E.g., in the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499 millimeters
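As a rough illustration of how the voxel sizes enter the resampling, here is a sketch that naively assumes the two volumes share their first voxel and are axis-aligned (which, as discussed above, the Frame of Reference comparison must confirm). Note that interp3 expects meshgrid-style query coordinates (x along columns), so meshgrid rather than ndgrid is used here:

% Hypothetical sketch: build the sampling grid in CT voxel index units
% by converting PET voxel centers to millimeters, then to CT indices.
ct_vox  = [1.5 1.5 0.6];          % CT voxel size in mm (from the question)
pet_vox = [2.6 2.6 3.12];         % PET voxel size in mm
[Xc, Yc, Zc] = meshgrid((0:size_pet(2)-1) * pet_vox(2) / ct_vox(2) + 1, ...
                        (0:size_pet(1)-1) * pet_vox(1) / ct_vox(1) + 1, ...
                        (0:size_pet(3)-1) * pet_vox(3) / ct_vox(3) + 1);
new_imageData_CT = interp3(imagedata_ct, Xc, Yc, Zc, 'nearest', -1024);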
I'll let someone else answer the question, but I think that you're asking the wrong one. I lack context, of course, but at first glance MATLAB isn't the right tool for the job.
Have a look at ITK (a C++ library with Python wrappers) and the "Multi-modal 3D image registration" article.
Try 3D Slicer (it has a GUI for the previous tool).
Try FreeSurfer (similar, focused on brain scans).
After you've done that registration step, you can export the resulting images (now of identical size and spacing) and continue with your interpolation in MATLAB if you wish (or carry on with the same tools).
There is a module in Slicer called PETCTFUSION which aligns the PET scan to the CT image. You can install it in a recent version of Slicer.
In the module's Display panel, options to select a colorizing scheme for the PET dataset are provided:
Grey provides white-to-black colorization, with black indicating the highest count values.
Heat provides a warm color scale, with dark red for the lowest and white for the highest count values.
Spectrum provides a color scale that goes from cool (dark blue) at the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both the PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the Registration package, under Resample Image.
If you would like to know more about PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python interactor to run your own code too.
And let me know if you face any problems.

Matching two distinct images with similar SURF features in MATLAB

We are working on gesture-to-text conversion using MATLAB and want the input gesture from the user to be matched against a predefined database. Using the detectSURFFeatures function, we extract the features, but proper matching does not take place unless the input is part of the reference image.
How do we match the feature pairs, e.g. between the input hand image and the database image?
We are not getting any error, yet the images are not matched.
Please suggest a solution.
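Without the actual images it is hard to diagnose, but a minimal sketch of the standard MATLAB SURF pipeline looks like this (the file names are placeholders); loosening MaxRatio is often what lets similar-but-not-identical images match:

% Minimal sketch of SURF matching between two images.
% 'gesture.png' and 'reference.png' are placeholder file names.
I1 = rgb2gray(imread('gesture.png'));     % assumes RGB input
I2 = rgb2gray(imread('reference.png'));
pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);
% The default ratio test is strict; raising MaxRatio keeps more
% candidate matches between similar (not identical) images.
pairs = matchFeatures(f1, f2, 'MaxRatio', 0.8, 'Unique', true);
showMatchedFeatures(I1, I2, vpts1(pairs(:,1)), vpts2(pairs(:,2)), 'montage');

If the gestures differ by more than a rotation and scale change, local-feature matching alone may simply not be enough, and a classifier trained on the gesture images may work better.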

Find center points / segment white blobs in binary images

In the image shown, I need to find the center points of the white blobs, or to segment each white blob from the background (to get an image which contains only that blob).
What is an efficient way to do it?
It seems this is exactly what you are looking for: Image Segmentation Tutorial ("BlobsDemo").
It contains a demo illustrating simple blob detection, measurement, and filtering. First it finds all the objects, then it filters the results to pick out objects of certain sizes. The basic concepts of thresholding, labeling, and regionprops are demonstrated with examples.
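Along the same lines as BlobsDemo, a minimal sketch (assuming bw already holds one of the binary images from the question):

% Minimal sketch: label white blobs in a binary image bw and mark
% their centroids; bwareaopen drops specks below 50 pixels (tunable).
bw = bwareaopen(bw, 50);
cc = bwconncomp(bw);                       % find connected components
stats = regionprops(cc, 'Centroid', 'Image');
centroids = vertcat(stats.Centroid);       % one [x y] row per blob
imshow(bw); hold on;
plot(centroids(:,1), centroids(:,2), 'r+');
% stats(k).Image is a cropped binary image containing only blob k,
% which covers the 'segment each white blob' part of the question.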
You need to use the watershed algorithm for segmentation.
http://www.mathworks.com/help/images/ref/watershed.html
After segmenting the cells, use the regionprops function.
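If some blobs touch each other, a minimal distance-transform watershed sketch (again assuming a binary image bw) would be:

% Minimal sketch: split touching blobs with a distance-transform
% watershed, then measure each resulting region.
D = bwdist(~bw);       % distance of each blob pixel to the background
D = -D;                % invert so blob centers become catchment basins
L = watershed(D);      % label the basins; ridge pixels get label 0
L(~bw) = 0;            % force background pixels back to label 0
stats = regionprops(L, 'Centroid');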

How to manually segment and label ROIs in an image in Matlab?

I'm a newbie to MATLAB. I'm basically attempting to manually segment a set of images and then manually label those segments as well. I looked into imfreehand(), but I'm unable to do this using imfreehand().
Basically, I want to follow the following steps :
Manually segment various ROIs on the image (imfreehand only lets me draw one segment, I think?)
Assign labels to all those segments
Save the segments and corresponding labels to be used further (I'm not sure what format they would be stored in; I think imfreehand would give me the positions, and I could store those along with the labels?)
Hopefully use these labelled segments in the images to form a training dataset for a neural network.
If there is some other tool or software which would help me do this, then any pointers would be very much appreciated. (Also I am new to stackoverflow, so if there is any way I could improve on the question to make it clearer, please let me know!) Thanks!
Derek Hoiem, a computer vision researcher at the University of Illinois, wrote an object labelling tool which does pretty much exactly what you asked for. You can download it from his page:
http://www.cs.illinois.edu/homes/dhoiem/software/index.html
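If you'd rather stay in plain MATLAB, a minimal sketch that loops imfreehand (the file name and dialog prompts are placeholders) stores one binary mask plus a label per pass:

% Minimal sketch: draw several freehand ROIs on one image and label each.
img = imread('myimage.png');               % placeholder file name
imshow(img);
masks = {}; labels = {};
while true
    h = imfreehand(gca);                   % draw one region
    masks{end+1}  = createMask(h);         % binary mask of that region
    answer = inputdlg('Label for this region:');
    labels{end+1} = answer{1};
    if ~strcmp(questdlg('Draw another region?'), 'Yes')
        break;                             % stop when the user is done
    end
end
save('labelled_rois.mat', 'masks', 'labels');

Saving binary masks rather than just the imfreehand positions keeps the later steps simple: the MAT-file can be read back directly when building the training set for the neural network.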

Perl - Ratio of homogeneous areas of an image

I would like to check whether an image has a lot of homogeneous areas. Therefore I would like to compute some kind of value for an image that expresses the ratio of homogeneous areas, depending on their amount and size (e.g. the value could range from 0 to 5).
Instead of a value, some kind of classification would work as well
[many homogeneous areas -> value/class 5; few homogeneous areas -> value/class 0].
I would like to do that in Perl. Is there a package/function or something like that?
What you want seems to be an area of image processing research which I am not familiar with. However, GraphicsMagick's mogrify utility has a -segment option:
Use -segment to segment an image by analyzing the histograms of the color components and identifying units that are homogeneous with the fuzzy c-means technique. The scale-space filter analyzes the histograms of the three color components of the image and identifies a set of classes. The extents of each class is used to coarsely segment the image with thresholding. The color associated with each class is determined by the mean color of all pixels within the extents of a particular class. Finally, any unclassified pixels are assigned to the closest class with the fuzzy c-means technique.
I don't know if this is of any use to you. You might have to hit the library on this one and read some research. You do have access to this through PerlMagick as well; however, it does not look like it gives access to the internals, but just produces an image based on the parameters.
In my tests (without really understanding what the parameters do), photos turned entirely black, whereas PNG images with large areas of similar colors were reduced to a sort of an average color. Whether you can use that fact to develop a measure is an open question I am not going to investigate ;-)