Image Parameters (Standard Deviation, Mean and Entropy) of an RGB Image - matlab

I couldn't find an answer for RGB images.
How can someone get the values of the standard deviation, mean and entropy of an RGB image using MATLAB?
From http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf, Table 3, it seems the author got a single value for each, so did he take the average of the RGB values?
Really in need of any help.

After reading the paper: because you are dealing with colour images, you have three channels of information to access, and altering any one of those channels can affect the information the image is trying to portray. The author wasn't very clear on how they obtained just a single value to represent the overall mean and standard deviation. Quite frankly, because this paper was published in a no-name journal, I'm not surprised they managed to get away with it. Had this been submitted to a better-known venue (IEEE, ACM, etc.), it would probably have been rejected outright due to that very ambiguity.
As to how I interpret this procedure, averaging all three channels doesn't make sense because you want to capture the differences over all channels. Doing this averaging smears that information and those differences get lost. Practically speaking, if you averaged all three channels together and one channel changed its intensity by 1 at a single pixel, the change in the reported average would be so small that it would probably not register as a meaningful difference.
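To get a feel for how small that effect is, here is a quick illustration (the 100x100 image size is just an arbitrary assumption):
A = zeros(100, 100, 3, 'uint8');           % hypothetical 100x100 RGB image
B = A;
B(1, 1, 2) = B(1, 1, 2) + 1;               % change the green channel of one pixel by 1
delta = mean(double(B(:))) - mean(double(A(:)))
% delta is 1/(3*100*100), roughly 3.3e-05 -- far too small to be meaningful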
In my opinion, what you should perhaps do is treat the entire RGB image as a 1D signal, then perform the mean, standard deviation and entropy of that image. As such, given an RGB image stored in image_rgb, you can unroll the entire image into a 1D array like so:
image_1D = double(image_rgb(:));
The double casting is important because you want to maintain floating point precision when calculating the mean and standard deviation. The image will most likely be of an unsigned integer type, and without the cast, intermediate calculations can saturate or get clamped at the limits of that data type and you won't get the right answer. With that done, you can calculate the mean, standard deviation and entropy like so:
m = mean(image_1D);
s = std(image_1D);
e = entropy(image_rgb(:));
entropy is a function in MATLAB's Image Processing Toolbox that calculates the entropy of images, so you should be fine here. As noted by @CitizenInsane in his answer, entropy unrolls a grayscale image into a 1D vector and applies the Shannon definition of entropy to that vector. By the same token, you can do the same thing with an RGB image unrolled into a 1D vector. One caveat: entropy converts non-logical inputs to uint8 internally and treats double values as lying in [0, 1], so pass it the original integer data (as above) rather than the double-cast image_1D.
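Putting it all together, a minimal end-to-end sketch might look like this (I'm assuming 'peppers.png', an RGB test image that ships with MATLAB, as a stand-in for your own image):
image_rgb = imread('peppers.png');   % stand-in uint8 RGB image; replace with your own
image_1D = double(image_rgb(:));     % unroll all three channels into one vector
m = mean(image_1D)                   % mean over all R, G and B values
s = std(image_1D)                    % standard deviation over all values
e = entropy(image_rgb(:))            % entropy of the unrolled (integer) values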

I have no idea how the author actually did it. But what you could do is treat the image as a 1D array of length W*H*3 and then simply calculate the mean and standard deviation.

I don't know if Table 3 was obtained in the same way, but at least looking at the entropy routine in MATLAB's Image Processing Toolbox, the RGB values are vectorized into a single vector:
I = imread('rgb'); % Read RGB values
I = I(:); % Vectorization of RGB values
p = imhist(I); % Histogram
p(p == 0) = []; % remove zero entries in p
p = p ./ numel(I); % normalize p so that sum(p) is one.
E = -sum(p.*log2(p));
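As a quick sanity check (assuming I above is of class uint8), this hand-rolled value should match the toolbox function, since entropy performs essentially these same steps internally:
E_builtin = entropy(I);   % I is already vectorized and uint8 at this point
abs(E - E_builtin)        % expected to be 0, or numerically negligible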

Related

Why iradon returns negative pixel values?

I took 200 projections at a step angle of 1.8 degrees using LabVIEW software. The size of the image is 2748 x 2748 pixels, uint16. Then using Matlab, I load the projection images, do the flat field correction, resize the image by 1/3 and save the images as .mat file. Then I run the code below for the filtered backprojection.
interp = 'linear'; % set interpolation: nearest, linear, spline, pchip, v5cubic
filter = 'Hann';   % set filter: Ram-Lak, Shepp-Logan, Cosine, Hamming, Hann, None
for s = 1:916
    for i = 1:200
        a(i,:) = proj065(:,s,i);
    end
    a = a';
    %figure(3), imagesc(a)
    b = iradon(a, 1.8, interp, filter);
    imagesc(b);
    recon(:,:,s) = b;
    s % display slice number as progress
    clear a
end
If I use a filter in this code, I get negative pixel values.
But if I run the code without the filter, I get positive pixel values.
Any idea why iradon returns negative pixel values in filtered back projection?
Thank you.
Nurul
Yes, the FBP (filtered back-projection) algorithm will do that. It can wrongly reconstruct voxels as having negative values, due to noise and discretization of the data. There is generally nothing you can do about it other than clipping those values.
As my PhD is about tomography reconstruction algorithms I feel contractually obligated (joking) to suggest the use of iterative algorithms to possibly obtain better images (never worse, often considerably better). Check SART/SIRT or CGLS for this problem.
However, you are calling the function wrong! In tomography, the step size alone is not enough to reconstruct an image; you generally need the exact angles. Thus iradon doesn't accept a step size as an input, it accepts an array of angles.
In your case, theta should be theta = linspace(0, 360 - 360/200, 200) (i.e. the angles 0:1.8:358.2), and you should call iradon(a,theta,...)
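A minimal sketch of the corrected call, assuming a is the sinogram built in the question's loop (one column per projection) and clipping the negative values as discussed:
theta = 0:1.8:1.8*199;                    % 200 explicit projection angles, in degrees
b = iradon(a, theta, 'linear', 'Hann');   % same interpolation and filter as in the question
b(b < 0) = 0;                             % optionally clip the negative FBP artifacts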

How to calculate image histogram without normalizing data in Matlab

I have an image I whose pixel intensities fall within the range 0-1. I can calculate the image histogram by normalizing it, but I found the curve is not exactly the same as the histogram of the raw data. This causes some issues for the later peak-finding process (see the two attached images).
My question is: in Matlab, is there any way I can plot the image histogram without normalizing the data, so that I can keep the curve shape unchanged? This would also help with raw images whose pixel intensities are not within the 0-1 range. Currently, I cannot calculate their histogram if I don't normalize the data.
The Matlab code for normalization and histogram calculation is attached. Any suggestion will be appreciated!
h = imhist(mat2gray(I));
The documentation of imhist tells us that the function checks the data type of the input and scales the values accordingly. Therefore, without testing with your attached data, this may work:
h = imhist(uint8(I));
Alternatively, you may scale from the integer representation to a floating-point representation, either by using the range argument of mat2gray
h = imhist(mat2gray(I, [0,255]));
or by simply dividing:
h = imhist(I/255);
The imhist answer in this thread describing normalizing or casting is completely correct. Alternatively, you could use the histogram function in MATLAB, which will work with unnormalized floating point data:
A = 255*rand(500,500);
histogram(A);
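If you also want the bin layout to stay independent of any rescaling (so the curve shape is stable for peak finding), you can fix the bin edges explicitly. A small sketch, assuming the raw data lies roughly in the 0-255 range:
A = 255*rand(500,500);            % unnormalized floating-point data
histogram(A, 'BinEdges', 0:256);  % one bin per integer level, 0 to 255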

adjust the gray level of an image according to its statistical distribution [duplicate]

Please explain as to what happens to an image when we use histeq function in MATLAB? A mathematical explanation would be really helpful.
Histogram equalization seeks to flatten your image histogram. Basically, it models the image as a probability density function (or in simpler terms, a histogram where you normalize each entry by the total number of pixels in the image) and tries to ensure that each intensity value is equally likely to occur (equiprobable).
The premise behind histogram equalization is for images that have poor contrast. Images that look like they're too dark, or if they're too washed out, or if they're too bright are good candidates for you to apply histogram equalization. If you plot the histogram, the spread of the pixels is limited to a very narrow range. By doing histogram equalization, the histogram will thus flatten and give you a better contrast image. The effect of this with the histogram is that it stretches the dynamic range of your histogram.
In terms of the mathematical definition, I won't bore you with the details; I would love to have some LaTeX to do it here, but it isn't supported. As such, I refer you to this link that explains it in more detail: http://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf
However, the final equation that you get for performing histogram equalization is essentially a 1-to-1 mapping. For each pixel in your image, you extract its intensity, then run it through this function. It then gives you an output intensity to be placed in your output image.
Suppose p_n is the probability of encountering a pixel with intensity n in your image (take the histogram bin count for pixel intensity n and divide by the total number of pixels in your image). Given that you have L intensities in your image, the output intensity g_i for an input intensity i is dictated by:
g_i = floor( (L-1) * sum_{n=0}^{i} p_n )
You add up all of the probabilities from pixel intensity 0, then 1, then 2, all the way up to intensity i. This is familiarly known as the Cumulative Distribution Function.
MATLAB essentially performs histogram equalization using this approach. However, if you want to implement this yourself, it's actually pretty simple. Assume that you have an input image im that is of an unsigned 8-bit integer type.
function out = hist_eq(im, L)
    if ~exist('L', 'var')
        L = 256;
    end
    h = imhist(im) / numel(im);              % probability of each intensity
    cdf = cumsum(h);                         % cumulative distribution function
    out = floor((L-1) * cdf(double(im)+1));  % map every pixel through the CDF
    out = uint8(out);
end
This function takes in an image that is assumed to be of unsigned 8-bit integer type. You can optionally specify the number of levels for the output. Usually, L = 256 for an 8-bit image, so if you omit the second parameter, L is assumed to be 256. The first statement after the if block computes the probabilities, the next computes the Cumulative Distribution Function (CDF), and the last two map each input intensity through the CDF and convert back to unsigned 8-bit integer. Note that floor is applied explicitly to match the formula above (a plain uint8 cast rounds to the nearest integer rather than flooring). You'll also need to take note that we add an offset of 1 when accessing the CDF. The reason is that MATLAB starts indexing at 1, while the intensities in your image start at 0.
The MATLAB command histeq pretty much does the same thing, except that if you call histeq(im), it assumes 64 intensity levels by default. Therefore, you can override that behaviour by specifying an additional parameter that tells histeq how many intensity values are seen in the image, just like what we did above. As such, you would do histeq(im, 256);. Calling this in MATLAB, and using the function I wrote above, should give you essentially identical results.
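If you want to sanity-check that claim, here is a small sketch (assuming the hist_eq function above is saved on your path):
im = imread('pout.tif');                  % built-in uint8 test image
o1 = hist_eq(im);                         % hand-rolled version from above
o2 = histeq(im, 256);                     % built-in with 256 output levels
max(abs(double(o1(:)) - double(o2(:))))   % typically 0 or 1; tiny differences can come from rounding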
As a bit of an exercise, let's use an image that is part of the MATLAB distribution called pout.tif. Let's also show its histogram.
im = imread('pout.tif');
figure;
subplot(2,1,1);
imshow(im);
subplot(2,1,2);
imhist(im);
As you can see, the image has poor contrast because most of the intensity values fit in a narrow range. Histogram equalization will flatten the histogram and thus increase the contrast of the image. As such, try doing this:
out = histeq(im, 256); %//or you can use my function: out = hist_eq(im);
figure;
subplot(2,1,1);
imshow(out);
subplot(2,1,2);
imhist(out);
This is what we get:
As you can see, the contrast is better. Darker pixels tend to move towards the darker end, while lighter pixels get pushed towards the lighter end. A successful result, I think! Bear in mind that not all images will give you a good result when you try histogram equalization. Image processing is mostly a trial-and-error thing, so you put a mishmash of different techniques together until you get a good result.
This should hopefully get you started. Good luck!

matlab code for perceptual hashing

I need MATLAB code for the perceptual hashing algorithm described here:
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
Basically I want this to remove details in an image and only leave information about the major structural components.
To do so, I think I need the following steps:
1. Reduce the DCT. Suppose the DCT is 32x32; just keep the top-left 8x8. Those represent the lowest frequencies in the picture.
2. Compute the average value. Like the Average Hash, compute the mean DCT value (using only the 8x8 DCT low-frequency values and excluding the first term, since the DC coefficient can be significantly different from the other values and will throw off the average).
3. Further reduce the DCT. Set the 64 hash bits to 0 or 1 depending on whether each of the 64 DCT values is above or below the average value. The result doesn't tell us the actual low frequencies; it just tells us the very rough relative scale of the frequencies to the mean. The result will not vary as long as the overall structure of the image remains the same; this can survive gamma and color histogram adjustments without a problem.
4. Reconstruct the image after the processing.
Anyone can help on any one of above steps?
I have tried some code (linked below) that gives some results, but it is not yet perfect:
https://stackoverflow.com/questions/26748051/extract-low-frequency-from-dct-coeffecients-of-an-image-in-matlab
Try this:
% read image
I = imread('cameraman.tif');
% cosine transform and reduction
d = dct2(I);
d = d(1:8,1:8);
% compute average
a = mean(mean(d));
% set bits, here unclear whether > or >= shall be used
b = d > a;
% maybe convert to string:
string = num2str(b(:)');
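If you want something closer to the full pipeline from the article (shrink to 32x32, take the DCT, keep the top-left 8x8, and exclude the DC term from the average), here is a hedged sketch along those lines; the test image and the string output format are just assumptions:
I = imread('cameraman.tif');    % grayscale test image (use rgb2gray first for colour input)
I = imresize(I, [32 32]);       % reduce size so the DCT is 32x32
D = dct2(double(I));            % 2-D DCT
low = D(1:8, 1:8);              % keep the top-left 8x8 (lowest frequencies)
v = low(:);
m = mean(v(2:end));             % average of the low-frequency values, excluding the DC term
bits = v > m;                   % the 64 hash bits
hashStr = sprintf('%d', bits)   % e.g. '1011...' (64 characters)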

image processing - enlarging an image using FFT (Matlab code)

I'm trying to write a simple matlab code which enlarges an image using fft. I tried the known algorithm for image expansion, which computes the Fourier transform of the image, pads it with zeros and computes the inverse Fourier of the padded image.
However, the inverse Fourier transform returns an image which contains complex numbers.
Therefore, when I try to show the result using imshow, I get the following warning:
Warning: Displaying real part of complex input.
Do you have an idea what I am doing wrong?
my code:
im = imread('fruit.jpg');
imFFT = fft2(im);
bigger = padarray(imFFT,[10,10]);
imEnlarged = ifft2(bigger);
Thanks!
That's because the FFT returns values corresponding to the discrete (spatial) frequencies from 0 through Fs, where Fs is the (spatial) sampling rate. You need to insert zeros at the high frequencies, which are located at the center of the returned FFT, not at its end.
You can use fftshift to move the high frequencies to the edges of the array, pad with zeros there, and then shift back with ifftshift (thanks to @Shai for the correction):
bigger = ifftshift(padarray(fftshift(imFFT),[10,10]));
Also, note that padding with zeros decreases the values in the enlarged image. You can correct that using a suitable amplification factor amp, which in this case would be equal to (1+2*10/length(im))^2:
bigger = ifftshift(padarray(fftshift(amp*imFFT),[10,10]));
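Putting these pieces together, a minimal sketch might look like this (I'm using 'cameraman.tif' as a stand-in grayscale image, since the shifting above assumes a single-channel input; the pad amount is arbitrary):
im = im2double(imread('cameraman.tif'));   % stand-in grayscale image
p = 10;                                    % zeros to pad on each side
F = fftshift(fft2(im));                    % move DC to the center
Fbig = padarray(F, [p p]);                 % the added zeros land at the high frequencies
amp = numel(Fbig) / numel(F);              % compensate for the extra samples
big = real(ifft2(ifftshift(amp * Fbig)));  % imaginary part is only numerical noise
imshow(big, []);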
You can pad at the higher frequencies directly (without the fftshift suggested by Luis Mendo):
>> BIG = padarray( amp*imFFT, [20 20], 0, 'post' );
>> big = ifft2( BIG );
If you want a strictly real result, then before you do the IFFT you need to make sure the zero-padded array is exactly conjugate symmetric. Adding the zeros off-center could prevent this required symmetry.
Due to finite numerical precision, you may still end up with a complex IFFT result, but the imaginary components will all be tiny values that are essentially equivalent to zero.
Your FFT library may contain a half-to-real (quarter-size input for 2D) version that enforces symmetry and throws away the almost-zero numerical noise for you.