I would like to check whether an image has a lot of homogeneous areas. Therefore I would like to get some kind of value for an image that expresses a ratio depending on the amount/size of its homogeneous areas (e.g. that value could have a range from 0 to 5).
Instead of a value, some kind of classification would work as well.
[many homogeneous areas -> value/class 5 ; few homogeneous areas -> value/class 0]
I would like to do that in Perl. Is there a package/function for this?
What you want seems to be an area of image processing research which I am not familiar with. However, GraphicsMagick's mogrify utility has a -segment option:
Use -segment to segment an image by analyzing the histograms of the color components and identifying units that are homogeneous with the fuzzy c-means technique. The scale-space filter analyzes the histograms of the three color components of the image and identifies a set of classes. The extents of each class is used to coarsely segment the image with thresholding. The color associated with each class is determined by the mean color of all pixels within the extents of a particular class. Finally, any unclassified pixels are assigned to the closest class with the fuzzy c-means technique.
I don't know if this is any use to you. You might have to hit the library on this one, and read some research. You do have access to this through PerlMagick as well. However, it does not look like it gives access to the internals, but just produces an image based on parameters.
In my tests (without really understanding what the parameters do), photos turned entirely black, whereas PNG images with large areas of similar colors were reduced to a sort of an average color. Whether you can use that fact to develop a measure is an open question I am not going to investigate ;-)
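If you do end up rolling your own measure, here is a hypothetical sketch of one simple approach (MATLAB rather than Perl, using the Image Processing Toolbox; the file name and threshold are made up): score homogeneity as the fraction of pixels whose local standard deviation is small.

img = im2double(rgb2gray(imread('photo.jpg')));  % hypothetical input image
sd = stdfilt(img, true(5));                      % local std-dev over 5x5 neighborhoods
flat = sd < 0.02;                                % "homogeneous" pixels; the threshold is a guess
score = 5 * nnz(flat) / numel(flat);             % 0 = few flat areas, 5 = mostly flat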
I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, once the background color value is selected, I have to segment all the objects in the image, and the result should be a labeled matrix of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it on MATLAB. For each pixel (matrix position) I should mark it as visited and then if the value corresponds to the one of the background, assign 0, if not, assign another value. The objects can be formed by different colors, so in the end, I need to segment groups of adjacent pixels, whatever their color is. Also I have to use 8-connectivity, in order to count the green object of the example image as only one object and not 4 different ones. And also, the objects should be counted from top to bottom, and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:

bw = img ~= bg_color;

Once you have a binary mask you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
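Putting the pieces together, a minimal sketch under these assumptions (picking the top-left pixel as the user-selected background is my own placeholder):

[ind, map] = rgb2ind(img, 256);  % quantize the RGB image to an indexed one
bg = ind(1,1);                   % placeholder: assume the user picked this pixel as background
bw = ind ~= bg;                  % binary mask: true wherever the pixel is not background
labels = bwlabel(bw, 8);         % labeled matrix: 0 = background, 1..n = objects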
For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)),...
linspace(1,size(imagedata_ct,2),size_pet(2)),...
linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT=interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT now matches the PET image. The problem is that the data of my new image is not correctly scaled, so it looks compressed. I think the reason is that the voxel sizes of the two images are different and are not involved in the interpolation. So for example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I take the voxel size into account in the interpolation?
You need to perform a matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations also, but this is unlikely) of the two volumes.
You may find this thread helpful to find out which DICOM Tags describe the volume and how to calculate transformation matrices to use for transforming between the patient (x, y, z in millimeters) and volume (x, y, z in column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you just put it as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data referred to as "registration" but this is a science of its own. See the link from Hugues Fontenelle.
BTW: In your example, you are not going to find a matching voxel in both volumes for each position, as the volumes cover different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499.2 millimeters
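Assuming the two volumes happen to share the same origin and orientation (in general you must honor the DICOM positions as described above), a minimal MATLAB sketch that folds the voxel sizes into the interpolation could look like this:

ct_vox = [1.5 1.5 0.6];                    % CT voxel size in mm
pet_vox = [2.6 2.6 3.12];                  % PET voxel size in mm
pet_sz = [192 126 128];                    % PET volume size in voxels
x_mm = ((1:pet_sz(1)) - 1) * pet_vox(1);   % physical x of each PET voxel center
y_mm = ((1:pet_sz(2)) - 1) * pet_vox(2);
z_mm = ((1:pet_sz(3)) - 1) * pet_vox(3);
% convert mm back to (fractional) CT voxel indices and sample on an ndgrid
[X,Y,Z] = ndgrid(x_mm/ct_vox(1) + 1, y_mm/ct_vox(2) + 1, z_mm/ct_vox(3) + 1);
new_imageData_CT = interpn(double(imagedata_ct), X, Y, Z, 'nearest', -1024);

Note that interpn is used instead of interp3, since interpn expects ndgrid-ordered query points, which matches the grids built above.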
I'll let someone else answer the question, but I think that you're asking the wrong one. I lack context, of course, but at first glance Matlab isn't the right tool for the job.
Have a look at ITK (C++ library with python wrappers), and the "Multi-modal 3D image registration" article.
Try 3DSlicer (it has a GUI for the previous tool)
Try FreeSurfer (similar, focused on brain scans)
After you've done that registration step, you could export the resulting images (now of identical size and spacing), and continue with your interpolation in Matlab if you wish (or with the same tools).
There is a toolbox in Slicer called PETCTFUSION which aligns the PET scan to the CT image. You can install it in the new version of Slicer.
In the module's Display panel shown below, options to select a colorizing scheme for the PET dataset are provided:
Grey will provide white-to-black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red for the lowest and white for the highest count values.
Spectrum will provide a color scale that goes from cool (dark blue) on the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the Registration package, under Resample Image.
If you would like to know more about the PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python interactor to run your own code too. Let me know if you face any problems.
I saw many Q&A here about squeezing space out of Matlab figures. However, I want to squeeze the space resulting from a possibly fixed aspect ratio, i.e. to choose the proper paper size for figure printing when the aspect is fixed.
Quite often I work with DEMs/maps/images, thus I use axis image. Now if I want to produce a high-resolution image I do something like
set(gcf,'PaperUnits','inches','PaperPosition',[0 0 4 3])
print('-dpng','-r300','somefile.png')
as described in Matlab KB.
The problem here is to determine the proper aspect so that I can specify a paper size that leaves no white/background stripes on either side.
For example, if I have a map (let's say 1000x2000 cells) with an aspect ratio of 0.5, and I print it on 4"x3" paper, I'll get background stripes on the sides. This is quite annoying, as I'd prefer 1.5"x3" paper plus axes & labels or so. Right now I have to adjust the paper size manually.
This is inconvenient, as I'd like a universal solution. For instance, I may print a plot that has no fixed aspect into a file that I expect to occupy 4"x3" as well. Or I may want to print a 3D figure. I'm aware of daspect and pbaspect, but how can I know how the figure is currently drawn?
Perhaps I can derive the current 2D aspect from get(gca,'Position') and then scale it to my maximum desired size (e.g., 4"x3"), while respecting whether the DataAspectRatioMode property is set to manual. Is that the way to proceed, or is there a better way?
I am not sure if I understand your problem exactly, but I have used the following commands to create PDF images that are sized exactly to the size of the figure. I have used this for both 2D and 3D figures. The "handle" variable is simply your figure handle.
set(handle,'Units','inches');                                  % measure the figure in inches
set(handle,'PaperUnits','Inches','PaperPositionMode','auto');  % print at the on-screen size
P = get(handle,'Position');                                    % [left bottom width height] in inches
set(handle,'PaperSize',[P(3),P(4)]);                           % make the page exactly as big as the figure
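For the original question of shrinking the paper to a fixed-aspect axes, here is a rough sketch of my own (untested beyond the simple case, and it ignores the extra room needed for labels): read the drawn plot-box aspect and fit it into a maximum paper size.

maxWH = [4 3];                        % largest acceptable paper [width height] in inches
pb = get(gca,'PlotBoxAspectRatio');   % reflects axis image / daspect settings
aspect = pb(2)/pb(1);                 % height/width of the drawn plot box
if maxWH(1)*aspect <= maxWH(2)
    paperWH = [maxWH(1), maxWH(1)*aspect];   % width-limited
else
    paperWH = [maxWH(2)/aspect, maxWH(2)];   % height-limited
end
set(gcf,'PaperUnits','inches','PaperPosition',[0 0 paperWH]);
print('-dpng','-r300','somefile.png')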
I'm trying to lay out images in a grid, with a few featured ones being 4x as big.
I'm sure it's a well-known layout algorithm, but I don't know what it is called.
The effect I'm looking for is similar to the screenshot shown below. Can anyone point me in the right direction?
UPDATED
To be more specific, let's limit it to the case where there are only the two sizes shown in the example. There can be an infinite number of items, with a set margin between them. Hope that clarifies things.
There is a well-known layout algorithm called treemapping, which is perhaps a bit too generic for your specific problem with some images being 4x as big, but could still be applicable particularly if you decide you want to have arbitrary sizes.
There are several different rectangular treemap algorithms, any of which could be used to visualise photos. Here is a nice example, which uses the strip algorithm to lay out photos with each size proportional to the rating of the photo.
This problem can be solved with a treemap, or with a space-filling curve, which reduces the 2D complexity to a 1D complexity and is closely related to quadtrees. You may want to look for Nick's hilbert curve quadtree spatial index blog.
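For the restricted two-size case from the update, neither pointer spells out an algorithm, so here is a hypothetical greedy sketch of my own (MATLAB; sizes holds 1 for a normal tile or 2 for a featured 2x2 tile, and margins are left to the renderer):

function pos = layoutTiles(sizes, ncols)
occ = false(0, ncols);              % occupancy grid, grown downwards as needed
pos = zeros(numel(sizes), 2);       % top-left [row col] of each tile
for k = 1:numel(sizes)
    s = sizes(k);                   % tile edge length in cells: 1 or 2
    r = 1; placed = false;
    while ~placed
        if r + s - 1 > size(occ, 1)
            occ(r + s - 1, ncols) = false;   % grow the grid to the needed row
        end
        for c = 1:ncols - s + 1
            if ~any(any(occ(r:r+s-1, c:c+s-1)))
                occ(r:r+s-1, c:c+s-1) = true;   % claim the cells
                pos(k,:) = [r c];
                placed = true;
                break
            end
        end
        r = r + 1;                  % no room in this row, try the next one
    end
end
end

Scanning rows top-to-bottom and columns left-to-right keeps the fill order predictable; for arbitrary sizes, the treemap algorithms above are the better fit.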
I am trying to implement a histogram equalization method (HE) for a UIImage in my iphone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
Grab UIImage data and convert from RGB to HSL
Apply HE on luminance channel
convert data back to RGB
Create new UIImage from data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24 bit data differently, as I have no idea what kind of image will be used with my app? Or can I assume 24 bit for images in the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. Yes, it will "cost more" since you have to do the conversion back and forth, but that's the price you have to pay if you don't want to affect the hue and saturation. Whether that is worth it for the images you're correcting depends on your application: are you OK with a performance hit in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" for images, but that's 3 x 8-bit components. The only way to know your answers, though, is to try.
I recommend using the YUV color space, both for accuracy and for computational simplicity (it is a linear combination of the RGB channels).
One method would be to apply the histogram equalization to the RGB image (call the result Image2).
Then let the user choose what they want: apply it only to luminosity, or to all 3 channels.
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image, and convert back to RGB.
For the second choice, just leave the user with Image2.
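A minimal sketch of the first choice (my own illustration, in MATLAB rather than Objective-C; it uses YCbCr, the digital cousin of YUV, and the file name is made up):

rgb = imread('photo.jpg');         % hypothetical 8-bit RGB input
ycc = rgb2ycbcr(rgb);              % YCbCr: Y = luma, Cb/Cr = chroma
ycc(:,:,1) = histeq(ycc(:,:,1));   % equalize the luma channel only
out = ycbcr2rgb(ycc);              % chroma is left untouched, so hue is preserved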
Since, after the transformation, you deal with I/V as continuous values, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by using fewer bins.
Just write the code to apply HE to each of the RGB components. Although there is a lot of calculation across the 3 components, the speed is OK. In most cases the contrast is improved, but the "look" of the image is changed, so I agree with transforming the RGB into another space and then applying the HE. I am still looking for the formula and the right color space for the HE. Which color space is easiest?
I wrote the HE on the iPad platform, but I found that after opening a big image taken from my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think it may be because I am pushing the device too hard, or because iOS allocates only a limited amount of memory for each app; if an app uses more than that pre-set amount, iOS kills it right away. So you must take care of the size of the input image, free unused memory, and watch for memory leaks. Using Xcode's Instruments tool to check for leaks is a must.