I have been working on a project extracting different cortical depths from FreeSurfer reconstructions (i.e. 0%, 25%, 50%, 75%, 100%) and then projecting the intensity values onto the surfaces. My end goal is to get the intensity values within an ROI at each depth layer.
So far I have been able to create equivolumetric surfaces at each depth using surface_tools, then project the intensity values from the original volumetric file onto the surface vertices with the mri_vol2surf command. The result is an Nx1 .mgh file holding the intensity value at each vertex of the selected depth layer. I can open this file in MATLAB using the load_mgh script, but I haven't found anything else that reads it.
My question is how to create an ROI mask within this layer, because I don't need the whole pial surface, just part of the layer at each depth. I have tried drawing a label mask in FreeSurfer with the original volume and the pial files at each depth (to use as guides) loaded. Then I tried converting the label file for the ROI I drew into a volumetric file using mri_label2vol (so that I could use mri_mask).
The problem is that when I apply mri_mask to the original .mgh file generated by mri_vol2surf, it creates a matrix that is still 2D, and all the values are 1 (rather than their normal intensities restricted to the ROI). I'm wondering if I should convert the .mgh file from mri_vol2surf back into voxel space before trying to mask it with the label file.
I was thinking about using mri_surf2vol with each surface layer, the original volume, and the -mkmask flag. Supposedly this creates a volume composed of all the voxels that intersect a surface file. If I had something like this, I could create ROIs for each layer volume and mask the way I would normally mask an MRI volume.
Also, a suggestion about a better way to do this in general would be greatly appreciated; I think I'm overcomplicating it.
I have attached pictures of the layer files loaded into FreeSurfer, and of the .mgh files generated by mri_vol2surf after mapping the intensity values to the surface vertices.
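For what it's worth, one way to avoid going back to voxel space entirely: a FreeSurfer .label file already lists the vertex numbers inside the ROI (line 1 is a comment, line 2 the vertex count, then one row per vertex: vertex number, RAS coordinates, stat value), so the Nx1 overlay can be masked directly by vertex index. A rough Python sketch (the function names and the choice to return None outside the ROI are my own; in MATLAB the same indexing works on the vector returned by load_mgh):

```python
# Mask an N x 1 surface overlay with a FreeSurfer .label file.
# .label format: line 1 is a comment, line 2 the vertex count,
# then one line per labeled vertex: "vertexno  x  y  z  value".

def read_label_vertices(path):
    with open(path) as f:
        lines = f.read().splitlines()
    n = int(lines[1])
    return [int(line.split()[0]) for line in lines[2:2 + n]]

def mask_overlay(values, label_vertices):
    keep = set(label_vertices)
    # Keep the intensity at labeled vertices, None everywhere else.
    return [v if i in keep else None for i, v in enumerate(values)]
```

This keeps everything on the surface, so there is no resampling back and forth between surface and voxel space.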
Related
This question may seem a little basic, but I would like some input on an efficient way of doing this.
Suppose I have the following image:
I also have a binary mask image as follows:
I detect MSER features on this image and plot the corresponding bounding ellipses.
I want to remove all MSER regions whose bounding ellipses overlap with the mask image. My issue is that I have a number of such operations and have to process a large number of images. What is the most efficient and fastest way of doing this, with minimal memory usage?
It depends on how your ellipses are stored, and perhaps on the size of your image. If they are represented as masks, then I would be tempted to superimpose all the ellipses first and then intersect the result with the rectangle. That gives you a single mask which you can apply to the original image.
If your ellipses are stored in symbolic form, like the output of regionprops, it might be more efficient to test each one against the rectangle first, and only if it intersects convert it into a mask and add it to the overall mask.
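To make the symbolic test concrete, here is a rough Python sketch of an approximate ellipse-versus-rectangle overlap test. The parameterization and the sampling density n are my own choices; regionprops gives you centroid, axis lengths, and orientation to feed into something like this:

```python
import math

def ellipse_overlaps_rect(cx, cy, a, b, theta, rect, n=64):
    """Approximate overlap test between an ellipse (center, semi-axes,
    rotation in radians) and an axis-aligned rectangle (x0, y0, x1, y1),
    by sampling points on the ellipse boundary."""
    x0, y0, x1, y1 = rect

    def in_rect(x, y):
        return x0 <= x <= x1 and y0 <= y <= y1

    # Ellipse center inside the rectangle?
    if in_rect(cx, cy):
        return True
    # Any sampled boundary point inside the rectangle?
    for k in range(n):
        t = 2 * math.pi * k / n
        x = cx + a * math.cos(t) * math.cos(theta) - b * math.sin(t) * math.sin(theta)
        y = cy + a * math.cos(t) * math.sin(theta) + b * math.sin(t) * math.cos(theta)
        if in_rect(x, y):
            return True
    # Rectangle corner inside the ellipse? (covers a rectangle fully inside)
    for (x, y) in ((x0, y0), (x0, y1), (x1, y0), (x1, y1)):
        dx, dy = x - cx, y - cy
        # Rotate the corner into the ellipse's own frame.
        u = dx * math.cos(theta) + dy * math.sin(theta)
        v = -dx * math.sin(theta) + dy * math.cos(theta)
        if (u / a) ** 2 + (v / b) ** 2 <= 1:
            return True
    return False
```

Only the ellipses that pass this cheap test would then be rasterized into the combined mask, which keeps memory usage down when most ellipses don't touch the masked area.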
I have a representation of a 3D object as a) a point cloud and b) a triangle mesh.
My goal is to rotate this object and then obtain the surface which is visible from one specific view.
Then I would remove the points which are not visible from the given view. Does anyone know how to do this in MATLAB? Which method is the fastest?
The file with the point cloud contains the coordinates of each point, and information about the color stored in three RGB channels.
First line:
`-35.4717 88.8637 -99.3782 97 78 46`
I will be grateful for any help.
One possible way would be to re-implement the pipeline of a graphics processor.
Transform your object and project all triangles into an image plane. In this image plane, store the distance of each part of each triangle.
With that information you can check whether a vertex is further away than the one you have already painted into the image plane.
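As a toy illustration of the depth-buffer idea using just the point cloud (skipping triangle rasterization), assuming an orthographic view along the z axis after the rotation has been applied; the pixel size is an arbitrary choice:

```python
def visible_points(points, pixel_size=1.0):
    """Point-based depth buffer: bin points into pixels by (x, y);
    within each pixel keep only the point with the smallest z, i.e.
    the one closest to a viewer looking along +z. Extra tuple elements
    (such as RGB values) ride along unchanged."""
    buffer = {}
    for p in points:
        x, y, z = p[0], p[1], p[2]
        key = (int(x // pixel_size), int(y // pixel_size))
        if key not in buffer or z < buffer[key][2]:
            buffer[key] = p
    return list(buffer.values())
```

This is only an approximation of true mesh-based occlusion (a point can hide behind a triangle it doesn't project exactly onto), but it is simple and fast for dense clouds.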
I am looking for a method that finds shapes in a 3D image in MATLAB. I don't have a real 3D sample image right now; in fact, my 3D image is actually a set of quantized 2D images.
The figure below is what I am trying to accomplish:
Although the example figure above is 2D, please understand that I am trying to do this in 3D. The input shape has these "tentacles", and I have to look for irregular shapes among them. The size of a tentacle can change from one point to another, but at a "consistent and smooth" pace; that is, it can be big at first and gradually become smaller. But if the shape suddenly gets bigger, not so gradually, like the red bottom-right area in the figure above, then that is one of the volumes of interest. Note that these shapes tend to be rounded and spherical, but some of them are completely arbitrary and random.
I've tried the following methods so far:
1. Erode n times and dilate n times: given that the "tentacles" are always smaller than the volumes of interest, this method will work as long as the volume is not too small. We also need a mechanism to deal with thicker portions of the tentacles that somehow become false positives.
2. Hough transform: although this method was suggested to me earlier (in Segmenting circle-like shapes out of Binary Image), I see that it works for some of the more rounded cases, but more difficult cases, such as less-rounded, distorted, and/or arbitrary shapes, can slip through it.
3. Isosurface: because my input is a set of quantized 2D images, using an isosurface lets me reconstruct the image in 3D and see things more clearly. However, I'm not sure what could be done further in this case.
So can anyone suggest some other techniques for segmenting such shapes out of these "tentacles"?
Every point in your image is either part of a tentacle or part of a volume of interest. If the expected girth of the tentacles is unknown a priori, then method 1 won't work because we can't set n. However, we know that the n that erases a tentacle is smaller than the n that erases a node. You can replace each point with an integer representing its distance to the edge. Effectively, this can be done via successive single-pixel erosion, replacing each pixel with the count of the iteration at which it was erased. Let's call this the thickness at the pixel, though my rusty old mind tells me there is a term of art for it.
Now we want to search for regions that have a higher-than-typical morphological distance from the boundary. I would do this by first skeletonizing the image (http://www.mathworks.com/help/toolbox/images/ref/bwmorph.html) and then searching for local maxima of the thickness along the skeleton, i.e. points on the skeleton where the thickness is larger than at the neighboring points.
Finally I would sort the local maxima by the thickness, a threshold on which should help to separate the volumes of interest from the false positives.
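For illustration, a minimal Python sketch of the successive-erosion thickness map described above, using 4-connected erosion on a small 0/1 grid. In MATLAB, bwdist computes essentially the same quantity directly, and the skeleton / local-maximum search would run on top of this:

```python
def thickness_map(mask):
    """Iteratively erode a binary mask (list of lists of 0/1) and record,
    for each foreground pixel, the iteration at which it was erased.
    A pixel survives one erosion step only if all four of its
    (up/down/left/right) neighbours are foreground; the border of the
    grid counts as background."""
    h, w = len(mask), len(mask[0])
    cur = [row[:] for row in mask]
    thick = [[0] * w for _ in range(h)]
    it = 0
    while any(any(row) for row in cur):
        it += 1
        nxt = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                if cur[i][j]:
                    nbrs = [cur[x][y] if 0 <= x < h and 0 <= y < w else 0
                            for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))]
                    if all(nbrs):
                        nxt[i][j] = 1        # survives this erosion pass
                    else:
                        thick[i][j] = it     # erased at iteration `it`
        cur = nxt
    return thick
```

On a 3x3 block of ones, the border pixels get thickness 1 and the center gets thickness 2, which is exactly the "erased at iteration k" count described above.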
I have an image from which I would like to extract GLCM texture in an area of interest (AOI), but the AOI is a non-rectangular shape.
Since an image is always stored as a matrix in MATLAB, even if the AOI is an irregular polygonal area, the neighboring pixels have to be included to make it a rectangular region. If all the pixels outside the area of interest are set to zero, does this affect the features extracted by the texture analysis?
Is it possible to do any kind of image analysis on non-rectangular regions?
Yes. If the pixels outside the area of interest were used when computing the gray-level co-occurrence matrix, the result would be incorrect; that is, it would not suit your requirements, as border processing is a matter of choice.
Existing software systems offer this feature:
If you use MATLAB then, according to http://www.mathworks.com/help/toolbox/images/ref/graycomatrix.html, you would need to assign the value NaN to the pixels of the input image which are outside the AOI.
In Mathematica, the function ImageCooccurrence very conveniently has an option named Masking which allows you to pass any AOI as a binary mask. From http://reference.wolfram.com/mathematica/ref/ImageCooccurrence.html:
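For illustration, a small Python sketch of a co-occurrence count that simply skips any pixel pair in which either pixel is masked out, which is the behaviour both approaches above aim for. The None-as-mask convention and the single fixed offset are my own simplifications:

```python
def masked_glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence counts over a 2D image stored as a list
    of lists, where masked-out pixels are None. A pair contributes to
    the matrix only if both pixels of the pair are inside the AOI."""
    di, dj = offset
    h, w = len(img), len(img[0])
    glcm = [[0] * levels for _ in range(levels)]
    for i in range(h):
        for j in range(w):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < h and 0 <= j2 < w:
                a, b = img[i][j], img[i2][j2]
                if a is not None and b is not None:
                    glcm[a][b] += 1
    return glcm
```

Note that with this convention a zero-filled border would never contaminate the counts, because border pixels are excluded rather than treated as gray level 0.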
I am doing a project in MATLAB on image processing.
Is there any possibility of getting a 3D image from a 2D image?
If you have multiple images of the same object and the position of the camera when each picture was taken, then it is possible, but still not easy. You can find two such datasets and links to relevant articles here: http://vision.middlebury.edu/mview/
A 3D image would be a projection from 4D (and to show one of those you've got to project down to 2D). Most images that can be displayed on a computer or in a picture frame are 2D projections of 3D objects; because this projection in effect selects a slice of the higher-dimensional space, a 2D image doesn't contain the information needed to invert the projection and get back to 3D.
But if you have sufficient sampling of the space, it is possible to reconstruct a 3D object from 2D images of it, though I don't know of any simple ways to do this.
You can't do this without supporting data such as multiple 2D images describing the same 3D object. You then need to figure out the perspectives from which each image was taken, reconcile those into real space, and generate your points using a method such as intersection of stereo lines through each image plane onto the same physical coordinate.
You can also attempt a superpixel approach by exploiting lighting data within a single image, though these methods aren't as accurate.
This is a big field.
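As a small illustration of the "intersection of stereo lines" step mentioned above, here is a midpoint triangulation sketch in Python: given two rays (camera center plus viewing direction, one per image), it returns the midpoint of their closest approach as the 3D point estimate. The ray inputs are assumed to have already been recovered from the camera poses:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two rays o + t*d (3-element lists).
    Solves for the parameters t1, t2 of the points of closest approach
    and returns the midpoint between them."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: [a - b for a, b in zip(u, v)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    r = sub(o2, o1)
    d, e = dot(d1, r), dot(d2, r)
    denom = b * b - a * c          # zero when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * k for o, k in zip(o1, d1)]
    p2 = [o + t2 * k for o, k in zip(o2, d2)]
    # For noisy, skew rays the two points differ; return their midpoint.
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```

In practice the rays never intersect exactly because of noise, which is why the midpoint (or a proper least-squares method) is used rather than a literal intersection.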
The Radon transform is used in tomography to reconstruct 3D representations (i.e. images) from many 2D projections of the 3D scene. This transform and its inverse are available in the Image Processing Toolbox of MATLAB (radon and iradon). You might want to have a look at it.
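As a toy illustration of what a projection is, the two simplest projections of a slice are just its row and column sums; real tomography takes many such projections at different angles and inverts them:

```python
def project_rows_cols(img):
    """The two simplest parallel-beam projections of a 2D slice stored
    as a list of lists: sums along rows (the 0-degree projection) and
    along columns (the 90-degree projection)."""
    row_proj = [sum(row) for row in img]
    col_proj = [sum(col) for col in zip(*img)]
    return row_proj, col_proj
```

A full Radon transform computes such line-integral profiles over a whole range of angles, and the inverse transform recombines them into the original slice.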
Hope this helps.
A.