image processing - enlarging an image using FFT (Matlab code) - matlab

I'm trying to write a simple matlab code which enlarges an image using fft. I tried the known algorithm for image expansion, which computes the Fourier transform of the image, pads it with zeros and computes the inverse Fourier of the padded image.
However, the inverse Fourier transform returns an image which contains complex numbers.
Therefore, when I try to show the result using imshow, I get the following warning:
Warning: Displaying real part of complex input.
Do you have an idea what I am doing wrong?
My code:
im = imread('fruit.jpg');
imFFT = fft2(im);
bigger = padarray(imFFT,[10,10]);
imEnlarged = ifft2(bigger);
Thanks!

That's because the FFT returns values corresponding to the discrete (spatial) frequencies from 0 through Fs, where Fs is the (spatial) sampling rate. You need to insert the zeros at the high frequencies, which are located at the center of the returned FFT, not at its end.
You can use fftshift to shift the high frequencies to the ends of the array, pad with zeros there, and then shift back with ifftshift (thanks to @Shai for the correction):
bigger = ifftshift(padarray(fftshift(imFFT),[10,10]));
Also, note that padding with zeros decreases the values in the enlarged image. You can correct that using a suitable amplification factor amp, which in this case would be equal to (1+2*10/length(im))^2:
bigger = ifftshift(padarray(fftshift(amp*imFFT),[10,10]));
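Putting the pieces together, here is a minimal sketch of the whole enlargement (the pad size of 10 and the grayscale conversion are my additions; the amplification factor is the one above, generalized to non-square images):
im = im2double(imread('fruit.jpg'));
if ndims(im) == 3                        % the 2D FFT approach below assumes a single channel
    im = rgb2gray(im);
end
pad = 10;                                % zeros added on each side of the spectrum
[h, w] = size(im);
amp = (1 + 2*pad/h) * (1 + 2*pad/w);     % compensates for ifft2 normalizing by the larger size
imFFT  = fft2(im);
padded = ifftshift(padarray(fftshift(amp*imFFT), [pad, pad]));
imBig  = real(ifft2(padded));            % discard the tiny imaginary residue
imshow(imBig, []);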

You can pad at the higher frequencies directly (without the fftshift suggested by Luis Mendo):
>> BIG = padarray( amp*imFFT, [20 20], 0, 'post' );
>> big = ifft2( BIG );

If you want a strictly real result, then before you do the IFFT you need to make sure the zero-padded array is exactly conjugate symmetric. Adding the zeros off-center could prevent this required symmetry.
Due to finite numerical precision, you may still end up with a complex IFFT result, but the imaginary components will all be tiny values that are essentially equivalent to zero.
Your FFT library may contain a half-to-real (quarter-size input for 2D) version that enforces symmetry and throws away the almost-zero numerical noise for you.
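In MATLAB you can also ask ifft2 to enforce that symmetry for you. A small sketch, reusing BIG from the answer above:
big = ifft2(BIG, 'symmetric');      % treats BIG as conjugate symmetric and returns a real array
% or keep the plain ifft2 and just inspect/drop the numerical noise:
tmp = ifft2(BIG);
maxImag = max(abs(imag(tmp(:))))    % should be tiny, on the order of eps
big = real(tmp);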

Related

How to calculate image histogram without normalizing data in Matlab

I have an image I whose pixel intensities fall within the range 0-1. I can calculate the image histogram by normalizing it, but I found that the curve is not exactly the same as the histogram of the raw data. This causes problems for the later peak-finding step (see the two attached images).
My question is: in Matlab, is there any way I can plot the image histogram without normalizing the data, so that the curve shape stays unchanged? This would also help with raw images whose pixel intensities are not within the 0-1 range. Currently, I cannot calculate their histogram if I don't normalize the data.
The Matlab code for normalization and histogram calculation is attached. Any suggestion will be appreciated!
h = imhist(mat2gray(I));
The documentation of imhist tells us that the function checks the data type of the input and scales the values accordingly. Therefore, without testing with your attached data, this may work:
h = imhist(uint8(I));
Alternatively, you may map the integer representation to a floating-point representation, either by passing the range argument to mat2gray
h = imhist(mat2gray(I, [0,255]));
or by simply dividing:
h = imhist(I/255);
The imhist answer in this thread describing normalizing or casting is completely correct. Alternatively, you could use the histogram function in MATLAB, which works with unnormalized floating-point data:
A = 255*rand(500,500);
histogram(A);
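If you also want the counts laid out like imhist's 256 bins (so the curve shape is directly comparable), here is a small sketch with explicit bin edges, assuming intensities in the 0-255 range:
A = 255*rand(500,500);              % unnormalized floating-point data
edges = -0.5:1:255.5;               % one bin per integer level, 256 bins like imhist
counts = histcounts(A, edges);      % raw, unnormalized counts
plot(0:255, counts);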

Why do I obtain a skewed spectrum from the FFT? (Matlab)

I'm trying to find the strongest frequency component with Matlab. It works, but if the data points and periods are not nicely aligned, I need to zero-pad my data to increase the FFT resolution. So far so good.
The problem is that, when I zero-pad too much, the frequency with the maximal power changes, even if everything is aligned nicely and I would expect a clear result.
This is my MWE:
Tmax = 1024;
resolution = 1024;
period = 512;
X = linspace(0,Tmax,resolution);
Y = sin(2*pi*X/period);
% N_fft = 2^12; % still fine, max_period is 512
N_fft = 2^13; % now max_period is 546.1333
F = fft(Y,N_fft);
power = abs(F(1:N_fft/2)).^2;
dt = Tmax/resolution;
freq = (0:N_fft/2-1)/N_fft/dt;
[~, ind] = max(power);
max_period = 1/freq(ind)
With zero-padding up to 2^12 everything works fine, but when I zero-pad to 2^13, I get a wrong result. It seems like too much zero-padding shifts the spectrum, but I doubt it. I rather expect a bug in my code, but I cannot find it. What am I doing wrong?
EDIT: It seems like the spectrum is skewed towards the low frequencies. Zero-padding just makes this visible:
Why is my spectrum skewed? Shouldn't it be symmetric?
Here is a graphic explanation of what you're doing wrong (which is mostly a resolution problem).
EDIT: this shows the power for each fft data point, mapped to the indices of the 2^14 dataset. That is, the indices for the 2^13 data numbered 1,2,3 map to 1,3,5 on this graph; the indices for 2^12 data numbered 1,2,3 map to 1,5,9; and so on.
You can see that the "true" value should in fact not be 512 -- your indexing is off by 1 or a fraction of 1.
It's not a bug in your code. It has to do with the properties of the DFT (and thus the FFT, which is merely a fast version of the DFT).
When you zero-pad, you add frequency resolution, particularly on the lower end.
Here you use a sine wave as a test, so you are basically correlating a finite-length sine with finite sines and cosines (see https://en.wikipedia.org/wiki/Fast_Fourier_transform for details), which have almost the same or a lower frequency.
If you were computing a "proper" Fourier transform, i.e. integrals from -inf to +inf, even those low-frequency components would give you zero coefficients, but since you are doing finite sums, the result of those correlations is not zero and hence the actual computed Fourier transform is inaccurate.
TL;DR: Use a better window function!
The long version:
After searching further, I finally found the explanation. Neither the indexing nor the additional low-frequency components added by the zero-padding are the problem. The frequency response of the rectangular window, combined with the negative frequency components, is the culprit. I found this out on a website explaining window functions.
I made more plots to explain:
Top: the spectrum without windowing: two delta peaks, one at the positive and one at the negative frequency. I always plotted only the positive part, since I didn't expect to need the negative frequency components.
Middle: the frequency response of the rectangular window function. It is relatively broad, but I didn't care, because I thought I'd have only a single peak.
Bottom: the spectrum of the zero-padded signal. In the time domain, this is the multiplication of the window function with the sine wave. In the frequency domain, this amounts to the convolution of the frequency response of the window function with the spectrum of the perfect sine. Since there are two peaks, the relatively broad frequency responses of the window overlap significantly, leading to a skewed spectrum and therefore a shifted peak.
The solution: a way to circumvent this is to use a proper window function, like a Hamming window, whose frequency response is much narrower, leading to less overlap.
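Applied to the MWE above, a minimal sketch of the fix (hamming is from the Signal Processing Toolbox; any tapered window with low side lobes would do):
w = hamming(numel(Y))';             % taper the finite sine instead of cutting it off sharply
Fw = fft(Y .* w, N_fft);            % windowed, zero-padded FFT
powerW = abs(Fw(1:N_fft/2)).^2;
[~, ind] = max(powerW);
max_period = 1/freq(ind)            % should now stay close to 512 even for large N_fft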

Image Parameters (Standard Deviation, Mean and Entropy) of an RGB Image

I couldn't find an answer for RGB image.
How can someone get a value of SD,mean and Entropy of RGB image using MATLAB?
From TABLE 3 of http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf, it seems the author gets a single value per image, so did he take the average of the RGB values?
Really in need of any help.
After reading the paper: because you are dealing with colour images, you have three channels of information to access. This means that you could alter one of the channels of a colour image and it could still affect the information it's trying to portray. The author wasn't very clear on how they obtained just a single value to represent the overall mean and standard deviation. Quite frankly, because this paper was published in a no-name journal, I'm not surprised they managed to get away with it. If this had been submitted to a better-known venue (IEEE, ACM, etc.), it would probably have been rejected outright due to that very ambiguity.
The way I interpret this procedure, averaging all three channels doesn't make sense, because you want to capture the differences over all channels. Averaging smears that information and those differences get lost. Practically speaking, if one channel changed its intensity by 1 and you then averaged the channels together, the reported change in the average would be so small that it probably would not register as a meaningful difference.
In my opinion, what you should perhaps do is treat the entire RGB image as a 1D signal, then perform the mean, standard deviation and entropy of that image. As such, given an RGB image stored in image_rgb, you can unroll the entire image into a 1D array like so:
image_1D = double(image_rgb(:));
The double casting is important because you want to maintain floating point precision when calculating the mean and standard deviation. The images will probably be of an unsigned integer type, and so this casting must be done to maintain floating point precision. If you don't do this, you may have calculations that get saturated or clamped beyond the limits of that data type and you won't get the right answer. As such, you can calculate the mean, standard deviation and entropy like so:
m = mean(image_1D);
s = std(image_1D);
e = entropy(image_1D);
entropy is a function in MATLAB that calculates the entropy of images, so you should be fine here. As noted by @CitizenInsane in his answer, entropy unrolls a grayscale image into a 1D vector and applies the Shannon definition of entropy to this 1D vector. By the same token, you can do the same thing with an RGB image, but we have already unrolled the signal into a 1D vector anyway, so the input into entropy is certainly well suited for the unrolled RGB image.
I have no idea how the author actually did it. But what you could do is treat the image as a 1D array of length W*H*3 and then simply calculate the mean and standard deviation.
I don't know if Table 3 is obtained in the same way, but at least looking at the entropy routine in MATLAB's Image Processing Toolbox, the RGB values are vectorized into a single vector:
I = imread('rgb'); % Read RGB values
I = I(:); % Vectorization of RGB values
p = imhist(I); % Histogram
p(p == 0) = []; % remove zero entries in p
p = p ./ numel(I); % normalize p so that sum(p) is one.
E = -sum(p.*log2(p));
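As a quick sanity check, here is a sketch (peppers.png is my choice of RGB demo image shipped with MATLAB) comparing this manual computation against calling entropy directly on the unrolled RGB values:
I = imread('peppers.png');          % any RGB test image
v = I(:);                           % unroll all three channels into one vector
p = imhist(v);                      % 256-bin histogram over all channel values
p(p == 0) = [];
p = p ./ numel(v);
E_manual = -sum(p .* log2(p));
E_builtin = entropy(v);             % should match E_manual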

matlab code for perceptual hashing

I need a matlab code for the perceptual hashing algorithm described here:
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
Basically I want this to remove details in an image and only keep the major structural information.
To do so, I think I need the following steps:
1. Reduce the DCT. Suppose the DCT is 32x32; just keep the top-left 8x8. Those represent the lowest frequencies in the picture.
2. Compute the average value. Like the Average Hash, compute the mean DCT value (using only the 8x8 DCT low-frequency values and excluding the first term, since the DC coefficient can be significantly different from the other values and would throw off the average).
3. Further reduce the DCT. Set the 64 hash bits to 0 or 1 depending on whether each of the 64 DCT values is above or below the average value. The result doesn't tell us the actual low frequencies; it just tells us the very rough relative scale of the frequencies to the mean. The result will not vary as long as the overall structure of the image remains the same; this can survive gamma and color histogram adjustments without a problem.
4. Reconstruct the image after the processing.
Can anyone help with any of the above steps?
I have tried some code that gives some results (in the link below), but it is not yet perfect:
https://stackoverflow.com/questions/26748051/extract-low-frequency-from-dct-coeffecients-of-an-image-in-matlab
Try this:
% read image
I = imread('cameraman.tif');
% cosine transform and reduction
d = dct2(I);
d = d(1:8,1:8);
% compute average
a = mean(mean(d));
% set bits, here unclear whether > or >= shall be used
b = d > a;
% maybe convert to string:
string = num2str(b(:)');
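To actually compare two images with this hash you mostly need a Hamming distance between the bit vectors. A minimal sketch (variable names are mine; unlike the snippet above, it resizes to 32x32 and excludes the DC term from the mean, as the article suggests):
I1 = imread('cameraman.tif');
I2 = imgaussfilt(I1, 2);                    % a blurred copy for testing
d1 = dct2(imresize(double(I1), [32 32]));   % DCT of the reduced image
d1 = d1(1:8, 1:8);                          % keep the low-frequency 8x8 block
bits1 = d1(:)' > mean(d1(2:end));           % exclude the DC coefficient d1(1,1) from the mean
d2 = dct2(imresize(double(I2), [32 32]));
d2 = d2(1:8, 1:8);
bits2 = d2(:)' > mean(d2(2:end));
hammingDistance = sum(bits1 ~= bits2)       % small distance => perceptually similar images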

Fit sine wave with a distorted time-base

I want to know the best way to fit a sine-wave with a distorted time base, in Matlab.
The distortion in time is given by an n-th order polynomial (n~10), of the form t_distort = P(t).
For example, consider the distortion t_distort = 8 + 12t + 6t^2 + t^3 (which is just the expansion of (t+2)^3).
This will distort a sine-wave as follows:
I want to be able to find the distortion given this distorted sine-wave. (i.e. I want to find the function t = G(t_distort), but t_distort = P(t) is unknown.)
If your resolution is high enough, then this is basically an angle-demodulation problem. The standard way to demodulate an angle-modulated signal is to take the derivative, followed by an envelope detector, followed by an integrator.
Since I don't know the exact numbers you're using, I'll show an example with my own numbers. Suppose my original timebase has 10 million points from 0 to 100:
t = 0:0.00001:100;
I then get the distorted timebase and calculate the distorted sine wave:
td = 0.02*(t+2).^3;
yd = sin(td);
Now I can demodulate it. Take the "derivative" using approximate differences divided by the step size from before:
ydot = diff(yd)/0.00001;
The envelope can be easily detected:
envelope = abs(hilbert(ydot));
This gives an approximation for the derivative of P(t). The last step is an integrator, which I can approximate using a cumulative sum (we have to scale it again by the step size):
tdguess = cumsum(envelope)*0.00001;
This gives a curve that's almost identical to the original distorted timebase (so, it gives a good approximation of P(t)):
You won't be able to get the constant term of the polynomial since we made our approximation from its derivative, which of course eliminates the constant term. You wouldn't even be able to find a unique constant term mathematically from just yd, since infinitely many values will yield the same distorted sine wave. You can get the other three coefficients of P(t) using polyfit if you know the degree of P(t) (ignore the last number, it's the constant term):
>> polyfit(t(1:10000000), tdguess, 3)
ans =
0.0200 0.1201 0.2358 0.4915
This is pretty close to the original, aside from the constant term: 0.02*(t+2)^3 = 0.02t^3 + 0.12t^2 + 0.24t + 0.16.
You wanted the inverse mapping t = G(t_distort). Can you do that knowing a close approximation of P(t) as found so far?
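One way to get that inverse numerically, as a sketch using the variables from this answer (tdguess is monotonically increasing here because the envelope is positive, and it is only known up to the missing constant term):
tq = t(1:end-1);                                 % timebase matching tdguess (diff drops one sample)
G = @(td_query) interp1(tdguess, tq, td_query);  % t = G(t_distort), valid within the sampled range
t_back = G(tdguess(5000000))                     % recovers t(5000000) up to numerical error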
Here's an analytically driven route that takes the asin of the signal with proper unwrapping of the angle. Then you can fit a polynomial to the angle using polyfit or other fit methods (search for fit and see). Last, take the sin of the fitted function and compare the signal to the fitted one... see this pedagogical example:
% generate data
t=linspace(0,10,1e2);
x=0.02*(t+2).^3;
y=sin(x);
% take asin^2 to obtain the points of "discontinuity", where asin hits +-1
da=(asin(y).^2);
[val locs]=findpeaks(da); % this can be done in many other ways too...
% construct the asin according to the proper phase unwrapping
an=NaN(size(y));
an(1:locs(1)-1)=asin(y(1:locs(1)-1));
for n=2:numel(locs)
an(locs(n-1)+1:locs(n)-1)=(n-1)*pi+(-1)^(n-1)*asin(y(locs(n-1)+1:locs(n)-1));
end
an(locs(n)+1:end)=n*pi+(-1)^(n)*asin(y(locs(n)+1:end));
r=~isnan(an);
p=polyfit(t(r),an(r),3);
figure;
subplot(2,1,1); plot(t,y,'.',t,sin(polyval(p,t)),'r-');
subplot(2,1,2); plot(t,x,'.',t,(polyval(p,t)),'r-');
title(['mean error ' num2str(mean(abs(x-polyval(p,t))))]);
p =
0.0200 0.1200 0.2400 0.1600
The reason I preallocate with NaN and avoid taking the asin at the points of discontinuity (locs) is to reduce the error of the fit later. As you can see, for 100 points between 0 and 10 the average error is of the order of floating-point accuracy, and the polynomial coefficients are as exact as you can get them.
The fact that you are not taking a derivative (as in the very elegant Hilbert-transform answer) allows the result to be numerically exact. For the same conditions, the Hilbert-transform solution has a much bigger average error (of order unity vs 1e-15).
The only limitation of this method is that you need enough points in the regime where the asin flips direction and that the function inside the sin is well behaved. If there's a sampling issue, you can truncate the data and keep only a smaller range closer to zero, so that it is still enough to characterize the function inside the sin. After all, you don't need millions of points to fit a 3-parameter function.