Individual fractal dimension - image-segmentation

I know this may be seen as a very basic question, but how can I calculate the fractal dimension for individual shapes in a figure?
Here is an idealised image, basically generated with cv2.fillPoly(image_blank, pts = np.int32([pol_vert]), color = (0,0,0)):
And here is a more realistic one (a segmented image from an SEM sample):
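No answer is recorded for this question here; as a hedged starting point, below is a minimal sketch of one standard approach, box counting applied separately to each connected component. It assumes MATLAB with the Image Processing Toolbox (the question itself uses OpenCV), a binary image bw whose shapes are foreground, and the hypothetical helper names perShapeBoxCount and boxCountDim:
% A minimal sketch (not from the thread): estimate a box-counting dimension
% separately for each connected component of a binary image bw.
function D = perShapeBoxCount(bw)
    cc = bwconncomp(bw);                     % label the individual shapes
    D = zeros(cc.NumObjects, 1);
    for k = 1:cc.NumObjects
        shape = false(size(bw));
        shape(cc.PixelIdxList{k}) = true;    % isolate one shape
        D(k) = boxCountDim(shape);
    end
end

function D = boxCountDim(shape)
    sizes = 2.^(0:floor(log2(min(size(shape)))) - 1);  % box edge lengths
    counts = zeros(size(sizes));
    for j = 1:numel(sizes)
        s = sizes(j);
        n = 0;                               % boxes containing shape pixels
        for r = 1:s:size(shape,1)
            for c = 1:s:size(shape,2)
                blk = shape(r:min(r+s-1,end), c:min(c+s-1,end));
                n = n + any(blk(:));
            end
        end
        counts(j) = n;
    end
    % the slope of log(count) versus log(1/size) estimates the dimension
    p = polyfit(log(1./sizes), log(counts), 1);
    D = p(1);
end
The per-shape loop isolates one component at a time, so touching shapes must already be separated by the segmentation.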

Related

How to fill the holes in a 3D image reconstructed from binary images of CT slices in Matlab

I have around 200 CT slices from patients and need to reconstruct them and segment out the eye orbit. In Matlab, I read the original DICOM images, use imbinarize(I,T) to convert them to binary images, and then stack them up to form the 3D orbital image (see the attached images below).
Original DICOM
Binarized image
Reconstructed orbit in 3D
However, as you can see, there are many holes inside the orbit. To cope with this, I use bwmorph() with 'thicken' and 'bridge', then use imclose() with a disk structuring element:
for i = 30 : 160
    info = dicominfo(dirOutput(i).name,'UseDictionaryVR',true);
    imgTemp = dicomread(info);                        % read the DICOM slice
    imgCropped = imcrop(imgTemp,bboxCrop);            % crop to the orbit region
    imgCroppedBin = imbinarize(imgCropped,0.51728);   % fixed-threshold binarization
    img_thick = bwmorph(imgCroppedBin,'thicken');
    img_bridge = bwmorph(img_thick,'bridge',Inf);
    se = strel('disk',1);
    I(:,:,i) = imclose(img_bridge,se);                % stack into the 3D volume
end
I is the resulting 3D array. Then I use fv = isosurface(I,ISOVALUE) and patch() to render the 3D array as a 3D image (as attached above).
The resulting 3D image still has many unfilled holes. Increasing the strel size or decreasing the ISOVALUE makes the final model thick and coarse (e.g. with staircase artifacts inside).
How can I fill the remaining holes in the 3D model while preserving the original smoothness of the image? Should I do this on the 3D image, or on the 2D binary images before the reconstruction?
I tried alphaShape as well to fill the holes; the result is more satisfactory and easier to control than isosurface() and patch(). But with a large alpha value, e.g. 8, the output model starts to lose detail, e.g. the "inferior orbital fissure" is smoothed away too.
Reconstructed image using alphaShape with alpha value = 8
Grateful if anyone can suggest a solution. Thanks!
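No answer appears for this question in this excerpt. As a hedged suggestion (mine, not from the thread): imfill accepts N-D binary arrays, so the holes can be filled directly on the stacked volume, and smooth3 can soften the staircase artifacts without enlarging the strel:
% A minimal sketch (not from the thread): fill holes directly in 3D,
% smooth the volume, then extract the surface as before.
I_filled = imfill(I, 'holes');    % fills cavities fully enclosed in 3D
% optional: Gaussian smoothing of the volume instead of a larger strel,
% then extract the surface at the 0.5 level
I_smooth = smooth3(double(I_filled), 'gaussian', 3);
fv = isosurface(I_smooth, 0.5);
p = patch(fv);
set(p, 'FaceColor', [0.8 0.8 1.0], 'EdgeColor', 'none');
daspect([1 1 1]); view(3); camlight; lighting gouraud
% note: imfill only fills cavities that are fully enclosed; holes that are
% open to the background must still be closed morphologically first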

Redundancy of natural images in image processing

I am trying to figure out the correlation between two images, where one is shifted pixel by pixel relative to the other, and to measure the correlation between them. I have two images, a rooster image and a woods image,
and I wrote some MATLAB code like this:
Im_Rooster = imread('rooster.jpg'); % read image files
Im_Woods = imread('woods.png');
Im_DRooster = im2double(rgb2gray(Im_Rooster)); % convert to gray image and double data type
Im_DWoods = im2double(Im_Woods);
for i = 0:1:30
    Img_Rooster_shift = circshift(Im_DRooster,i,2); % shift image by i pixels along columns
    Img_Woods_shift = circshift(Im_DWoods,i,2);
    Rooster_correlation_val(1,i+1) = corr2(Im_DRooster,Img_Rooster_shift); % correlation between original and shifted image
    Woods_correlation_val(1,i+1) = corr2(Im_DWoods,Img_Woods_shift);
end
x = 0:1:30;
figure(1), plot(x,Rooster_correlation_val,x,Woods_correlation_val) % plot the result graph
legend('rooster','woods')
This gives me the following plot:
Can somebody explain the meaning of this graph result?
What is the connection between the correlation coefficient and natural images?
The result implies that in natural images, the probability that neighbouring pixels share the same colour information is higher than the probability that they share the same light intensity/value. This is naturally explained by the fact that colours in nature are the result of light-matter interaction, and matter is not distributed randomly: we usually find patches of the same material in the same place, while shadows depend on the surrounding space and can therefore behave more randomly.
When you circshift the image, you are actually comparing neighbouring pixels; this is auto-correlation, but in space. When you ignore the colour information, you are left with the intensity values, which can easily contain higher frequencies, because now we enter the light-and-shadow realm, which depends on light angles, the shape of the matter, and neighbouring shapes and locations.
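To make the colour-versus-intensity claim concrete, here is a small sketch (my illustration, not part of the original answer) repeating the experiment on the hue and value channels of one image:
% A minimal sketch: compare spatial autocorrelation of the hue (colour)
% and value (intensity) channels of a natural image.
rgb = im2double(imread('rooster.jpg'));
hsv = rgb2hsv(rgb);
H = hsv(:,:,1);   % colour information
V = hsv(:,:,3);   % light intensity
shifts = 0:30;
corrH = zeros(size(shifts));
corrV = zeros(size(shifts));
for k = 1:numel(shifts)
    corrH(k) = corr2(H, circshift(H, shifts(k), 2));
    corrV(k) = corr2(V, circshift(V, shifts(k), 2));
end
% if the claim holds, corrH should decay more slowly than corrV
plot(shifts, corrH, shifts, corrV)
legend('hue (colour)', 'value (intensity)')
xlabel('shift (pixels)'), ylabel('correlation coefficient')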

Which kind of filtering is used in SPCImage for binning?

I was wondering if anyone knows which kind of filter is applied by SPCImage from the Becker & Hickl system.
I am taking some FLIM data with my system and I want to create the lifetime images. To do so, I want to bin my images the same way SPCImage does, so I can increase my SN ratio. The binning goes 1x1, 3x3, 5x5, etc. I have created a function for 3x3 binning, but each larger size gets more complicated...
I want to do it in MATLAB, and maybe there is already a function that can help me with this.
Many thanks for your help.
This question is old, but for anyone else wondering: you want to sum the pixels in a (2M+1) x (2M+1) neighborhood for each plane (with M an integer), so I am pretty sure you can approach the problem by treating it as a convolution.
% This is your original 3D SDT image.
% I assume that you have ordered the image with the spatial dimensions along
% the first and second axes and the time channels along the third dimension.
img = ... % <- your 3D image goes here
% This describes your filter. M=1 means take a one-pixel rectangle around the
% center pixel and add those values to the center, etc. (i.e. M=1 equals a
% total of 3x3 pixels accumulated).
M = 2;
% this is the (2D) filter for your convolution
filtr = ones(2*M+1, 2*M+1);
% the resulting binned image (3D)
img_binned = convn(img, filtr, 'same');
You should definitely check the result against your calculation, but it should do the trick.
I think you need to test/investigate image filter functions to apply to this kind of image: fluorescence-lifetime imaging microscopy.
A median filter, as shown here, is good for smoothing things. Alternatively, a weighted moving average filter applied to the image erases the bright spots and maintains only the broad features (a small sketch follows below).
So you should review digital image processing in MATLAB.
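For reference, the median-filter suggestion as a sketch (my illustration), assuming the FLIM stack img is ordered spatial x spatial x time as in the convolution answer above:
% A minimal sketch: 3x3 median filter applied to each time channel;
% medfilt2 is from the Image Processing Toolbox.
img_filt = zeros(size(img));
for t = 1:size(img, 3)
    img_filt(:,:,t) = medfilt2(img(:,:,t), [3 3]);
end
% unlike the convolution binning above, this smooths the counts rather than
% accumulating them, so it changes the photon statistics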

Image similarity by Euclidean distance in HSV color space in MATLAB

The code included below calculates the Euclidean distance between two images in HSV color space; if the result is under a threshold (here set to 0.5), the two images are considered similar and are grouped into one cluster.
This will be done for a group of images (video frames actually).
It worked well on a group of sample images, but when I change the sample it starts to behave oddly, e.g. the result is low for two different images and high (around 1.2) for two similar images.
For example, the result for these two very similar images (first pic and second pic) is relatively high, when it should actually be under 0.5.
What is wrong?
In the code below, f is divided by 100 to allow comparison to values near 0.5.
Im1 = imread('1.jpeg');
Im2 = imread('2.jpeg');
hsv = rgb2hsv(Im1);
hn1 = hsv(:,:,1);
hn1=hist(hn1,16);
hn1=norm(hn1);
hsv = rgb2hsv(Im2);
hn2 = hsv(:,:,1);
hn2=hist(hn2,16);
hn2=norm(hn2);
f = norm(hn1-hn2,1)
f=f/100
These two lines:
hn1=hist(hn1,16);
hn1=norm(hn1);
convert your 2D image into a scalar. I suspect that is not what you're interested in doing.
EDIT:
A better approach would probably be:
hn1 = hist(hn1(:),16) / numel(hn1(:));
but you haven't really given us much on the math, so this is just a guess.
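Putting the suggested fix together for both images, a minimal sketch (my assembly of the answer's one-liner, not the answerer's complete code):
% A minimal sketch: flatten each hue channel, take a normalized 16-bin
% histogram, then compare with a Euclidean distance.
Im1 = imread('1.jpeg');
Im2 = imread('2.jpeg');
hsv1 = rgb2hsv(Im1);
hsv2 = rgb2hsv(Im2);
h1 = hsv1(:,:,1);                    % hue channels
h2 = hsv2(:,:,1);
hn1 = hist(h1(:),16) / numel(h1);    % normalized hue histograms
hn2 = hist(h2(:),16) / numel(h2);
f = norm(hn1 - hn2)                  % Euclidean distance between histograms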

How to remove camera noise from a CMOS camera

I have attached two consecutive frames captured by a CMOS camera with an IR filter. The checkerboard object was stationary when the images were captured, but the difference between the two images is nearly 31000 pixels. This could affect my results. Can you tell me what kind of noise this is and how I can remove it? Please suggest any algorithms or functions that could remove this noise.
Thank you, and sorry for my poor English.
Image1 : [1]: http://i45.tinypic.com/2wptqxl.jpg
Image2: [2]: http://i45.tinypic.com/v8knjn.jpg
That noise appears to result from the camera sensor (Bayer-to-RGB conversion); a checkerboard pattern is still left over.
Lossy JPEG compression also contributes a lot to the process. You should first get access to the raw images.
From those particular images, I'd first try edge detection filters (Sobel horizontal and vertical) to build a mask that selects between a median filter / local histogram equalization for the flat areas and some checkerboard-reducing filter for the edges (a rough sketch follows below). The point is that probably no single filter can do well on both the JPEG ringing artifacts and the jagged edges. Then the real question is: what other kinds of images need to be processed?
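A rough sketch of that masking idea (my illustration; the threshold and the per-region filters are placeholders), assuming a grayscale double image a:
% A rough sketch: Sobel gradient magnitude -> edge mask -> blend a
% median-filtered version (flat areas) with an edge-specific version.
[gmag, ~] = imgradient(a, 'sobel');               % Sobel gradient magnitude
edgeMask = gmag > 0.1;                            % placeholder threshold
edgeMask = imdilate(edgeMask, strel('disk', 2));  % widen the edge region
flat = medfilt2(a, [5 5]);                        % smoothing for flat areas
out = flat;
out(edgeMask) = a(edgeMask);   % placeholder: apply an edge-specific filter here instead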
From the comments: if the corner points need to be exact, then the more likely solution is to search for features (corner points with subpixel resolution), map one image's set of points onto the other image's set of corners, and search for the best affine transformation matrix that converts one set to the other. With this matrix one can then resample the other image.
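A minimal sketch of that feature-based alignment (my illustration; it assumes the Computer Vision Toolbox and grayscale images a and b):
% A minimal sketch: detect corners, match them, fit an affine transform,
% and resample image b onto image a.
ptsA = detectHarrisFeatures(a);
ptsB = detectHarrisFeatures(b);
[fA, vA] = extractFeatures(a, ptsA);
[fB, vB] = extractFeatures(b, ptsB);
pairs = matchFeatures(fA, fB);                    % putative correspondences
tform = estimateGeometricTransform( ...
    vB(pairs(:,2)), vA(pairs(:,1)), 'affine');    % robust (MSAC) affine fit
bAligned = imwarp(b, tform, 'OutputView', imref2d(size(a)));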
One can fortunately estimate motion vectors with subpixel resolution without brute-force searching all possible subpixel locations: when calculating a matched filter, one gets local maxima as potential candidates for exact matches. But this is not all there is. One can calculate a more precise approximation of the peak location by studying the matched filter outputs at the nearby pixels: for an exact match the output should be symmetric; otherwise the 'energies' of the matched filter are biased towards the second-best location. (A 2nd-degree polynomial fit plus finding its maximum can work.)
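As a concrete version of that parabolic refinement (my sketch, one axis at a time), given matched-filter outputs c at integer shifts:
% A minimal sketch: refine the peak location by fitting a parabola through
% the peak and its two neighbours.
[~, i] = max(c);                        % integer peak (assumes 1 < i < numel(c))
cm = c(i-1); c0 = c(i); cp = c(i+1);
% vertex of the parabola through (-1,cm), (0,c0), (1,cp)
delta = (cm - cp) / (2*(cm - 2*c0 + cp));
peak = i + delta;                       % subpixel peak position
% for 2D matched-filter outputs, apply the same fit along each axis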
Looking closely at these images, I must agree with @Aki Suihkonen.
In my view, the main noise comes from the JPEG compression, which causes sharp edges to "ring". I'd try a "de-speckle" type of filter on the images and see if this makes a difference. Some info that can help you implement this can be found at this link.
In a quicker and dirtier fashion, you can apply one of the many standard tools; for example, given that the images are a and b:
(i) Just smooth the images with a Gaussian filter; this can reduce the noise differences between the images by an order of magnitude. For example:
h=fspecial('gaussian',15,2);
a=conv2(a,h,'same');
b=conv2(b,h,'same');
(ii) Reduce Noise By Adaptive Filtering
a = wiener2(a,[5 5]);
b = wiener2(b,[5 5]);
(iii) Adjust Intensity Values Using Histogram Equalization
a = histeq(a);
b = histeq(b);
(iv) Adjust Intensity Values to a Specified Range
a = imadjust(a,[0 0.2],[0.5 1]);
b = imadjust(b,[0 0.2],[0.5 1]);
If your images are supposed to be black and white but you have captured them in gray scale, there could be differences due to noise.
You can convert the images to black and white by defining a threshold: any pixel with a value less than that threshold is assigned 0, and anything larger is assigned 1, or whatever your gray-scale maximum is (maybe 255).
Assume your image is I, your gray-scale levels run from 0 to 255, and you choose a threshold of 100. To make it black and white:
I(I < 100) = 0;     % below the threshold -> black
I(I >= 100) = 255;  % at or above the threshold -> white
Now you have a black and white image. Do the same for the other image, and you should see a very small difference if the camera and the subject have not moved.
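For example, to quantify the remaining difference (a small usage sketch; frame1 and frame2 are hypothetical names for the two frames):
% Small usage sketch: threshold both frames the same way and count the
% pixels that still differ.
I1 = 255 * uint8(frame1 >= 100);
I2 = 255 * uint8(frame2 >= 100);
numDiff = nnz(I1 ~= I2)   % pixels that still differ after thresholding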