I'm trying to verify an FFT algorithm I plan to use for a project against the same computation in Matlab.
The problem is that my own C FFT function always returns the right half (the second half) of the double-sided FFT spectrum that Matlab evaluates, never the first half as "expected".
For instance, if my third bin has the form a+i*b, the third bin of Matlab's FFT is a-i*b. The a and b values are the same, but I always get the complex conjugate of Matlab's result.
I know that in terms of amplitudes and power there's no trouble (because of the absolute value), but I wonder whether in terms of phases I'm always going to read the wrong angles.
I'm not skilled enough in Matlab to know (and I have not found useful information on the web) whether Matlab's FFT returns the spectrum with negative frequencies first and then positive ones, or whether I have to fix my FFT algorithm, or whether everything is fine because the phases are unchanged regardless of which half of the FFT we choose as the single-sided spectrum (though I doubt this last option).
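One thing I did check: conjugation leaves the magnitude of a bin untouched but negates its phase, so if my bins really are the conjugates of Matlab's, every angle I read would have the wrong sign:
z = 3 + 4i;
abs(z) - abs(conj(z))   % 0: magnitudes are identical
angle(z)                % 0.9273
angle(conj(z))          % -0.9273: phase flips sign under conjugation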
Example:
If S is the sample array with N=512 samples, Y = fft(S) in Matlab returns the FFT as follows (the signs of the imaginary parts in the first half of the array are arbitrary, just to show the complex-conjugate relationship with the second half):
1 A1 + i*B1 (DC, B1 is always zero)
2 A2 + i*B2
3 A3 - i*B3
4 A4 + i*B4
5 A5 + i*B5
...
253 A253 - i*B253
254 A254 + i*B254
255 A255 + i*B255
256 A256 + i*B256
257 A257 - i*B257 (Nyquist, B257 is always zero)
258 A256 - i*B256
259 A255 - i*B255
260 A254 - i*B254
261 A253 + i*B253
...
509 A5 - i*B5
510 A4 - i*B4
511 A3 + i*B3
512 A2 - i*B2
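(This second half is no mystery by itself: for a real input, the fft output satisfies Y(k) = conj(Y(N+2-k)) for k = 2..N, which a quick check confirms:)
S = randn(1, 512);
Y = fft(S);
max(abs( Y(2:end) - conj(Y(end:-1:2)) ))  % essentially zero: conjugate symmetry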
My FFT implementation returns only 256 values (and that's fine) in the Y array as:
1 1 A1 + i*B1 (A1 is the DC, B1 is the Nyquist bin; both are purely real)
2 512 A2 - i*B2
3 511 A3 + i*B3
4 510 A4 - i*B4
5 509 A5 + i*B5
...
253 261 A253 + i*B253
254 260 A254 - i*B254
255 259 A255 - i*B255
256 258 A256 - i*B256
Here the first column is the actual index into my Y array and the second column is the corresponding row in the Matlab FFT output.
As you can see, my FFT implementation (DC apart) returns the FFT like the second half of Matlab's FFT, in reverse order.
To summarize: even if I use fftshift as suggested, it seems that my implementation always returns what the Matlab FFT treats as the negative part of the spectrum.
Where is the error?
This is the code I use:
Note 1: the FFT array is not declared here; it is modified inside the function. On entry it holds the N samples (real values), and on exit it contains the N/2+1 bins of the single-sided FFT spectrum.
Note 2: the N/2+1 bins fit into N/2 complex slots because the DC component is always real (it is stored in FFT[0]) and so is the Nyquist bin (stored in FFT[1]); apart from this exception, each even element K holds a real part and the following odd element K+1 holds the corresponding imaginary part.
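To make the packed layout concrete, here is a Matlab sketch of how I believe the array lines up with fft's single-sided spectrum (ignoring the conjugation issue I'm asking about; the variable names are mine, just for illustration):
S = randn(1, 512); N = length(S);
Y = fft(S);
half = Y(1:N/2+1);                        % single-sided spectrum, N/2+1 bins
packed = zeros(1, N);                     % mirrors the C FFT[] layout (1-based here)
packed(1) = real(half(1));                % FFT[0]: DC, purely real
packed(2) = real(half(end));              % FFT[1]: Nyquist, purely real
packed(3:2:end-1) = real(half(2:end-1));  % real parts of bins 2..N/2
packed(4:2:end)   = imag(half(2:end-1));  % imaginary parts of bins 2..N/2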
void Fft::FastFourierTransform( bool inverseFft ) {
double twr, twi, twpr, twpi, twtemp, ttheta;
int i, i1, i2, i3, i4;
double c1, c2; // c1 and c2 are assigned 0.5 and -0.5 below, so they must be floating point, not int
double h1r, h1i, h2r, h2i, wrs, wis;
int nn, ii, jj, n, mmax, m, j, istep, isign;
double wtemp, wr, wpr, wpi, wi;
double theta, tempr, tempi;
// NS is the number of samples and it must be a power of two
if( NS == 1 )
return;
if( !inverseFft ) {
ttheta = 2.0 * PI / NS;
c1 = 0.5;
c2 = -0.5;
}
else {
ttheta = 2.0 * PI / NS;
c1 = 0.5;
c2 = 0.5;
ttheta = -ttheta;
twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
twpi = Sin(ttheta);
twr = 1.0+twpr;
twi = twpi;
for( i = 2; i <= NS/4+1; i++ ) {
i1 = i+i-2;
i2 = i1+1;
i3 = NS+1-i2;
i4 = i3+1;
wrs = twr;
wis = twi;
h1r = c1*(FFT[i1]+FFT[i3]);
h1i = c1*(FFT[i2]-FFT[i4]);
h2r = -c2*(FFT[i2]+FFT[i4]);
h2i = c2*(FFT[i1]-FFT[i3]);
FFT[i1] = h1r+wrs*h2r-wis*h2i;
FFT[i2] = h1i+wrs*h2i+wis*h2r;
FFT[i3] = h1r-wrs*h2r+wis*h2i;
FFT[i4] = -h1i+wrs*h2i+wis*h2r;
twtemp = twr;
twr = twr*twpr-twi*twpi+twr;
twi = twi*twpr+twtemp*twpi+twi;
}
h1r = FFT[0];
FFT[0] = c1*(h1r+FFT[1]);
FFT[1] = c1*(h1r-FFT[1]);
}
if( inverseFft )
isign = -1;
else
isign = 1;
n = NS;
nn = NS/2;
j = 1;
for(ii = 1; ii <= nn; ii++) {
i = 2*ii-1;
if( j>i ) {
tempr = FFT[j-1];
tempi = FFT[j];
FFT[j-1] = FFT[i-1];
FFT[j] = FFT[i];
FFT[i-1] = tempr;
FFT[i] = tempi;
}
m = n/2;
while( m>=2 && j>m ) {
j = j-m;
m = m/2;
}
j = j+m;
}
mmax = 2;
while(n>mmax) {
istep = 2*mmax;
theta = 2.0 * PI /(isign*mmax);
wpr = -2.0 * Pow( Sin( 0.5 * theta ), 2 );
wpi = Sin(theta);
wr = 1.0;
wi = 0.0;
for(ii = 1; ii <= mmax/2; ii++) {
m = 2*ii-1;
for(jj = 0; jj <= (n-m)/istep; jj++) {
i = m+jj*istep;
j = i+mmax;
tempr = wr*FFT[j-1]-wi*FFT[j];
tempi = wr*FFT[j]+wi*FFT[j-1];
FFT[j-1] = FFT[i-1]-tempr;
FFT[j] = FFT[i]-tempi;
FFT[i-1] = FFT[i-1]+tempr;
FFT[i] = FFT[i]+tempi;
}
wtemp = wr;
wr = wr*wpr-wi*wpi+wr;
wi = wi*wpr+wtemp*wpi+wi;
}
mmax = istep;
}
if( inverseFft )
for(i = 1; i <= 2*nn; i++)
FFT[i-1] = FFT[i-1]/nn;
if( !inverseFft ) {
twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
twpi = Sin(ttheta);
twr = 1.0+twpr;
twi = twpi;
for(i = 2; i <= NS/4+1; i++) {
i1 = i+i-2;
i2 = i1+1;
i3 = NS+1-i2;
i4 = i3+1;
wrs = twr;
wis = twi;
h1r = c1*(FFT[i1]+FFT[i3]);
h1i = c1*(FFT[i2]-FFT[i4]);
h2r = -c2*(FFT[i2]+FFT[i4]);
h2i = c2*(FFT[i1]-FFT[i3]);
FFT[i1] = h1r+wrs*h2r-wis*h2i;
FFT[i2] = h1i+wrs*h2i+wis*h2r;
FFT[i3] = h1r-wrs*h2r+wis*h2i;
FFT[i4] = -h1i+wrs*h2i+wis*h2r;
twtemp = twr;
twr = twr*twpr-twi*twpi+twr;
twi = twi*twpr+twtemp*twpi+twi;
}
h1r = FFT[0];
FFT[0] = h1r+FFT[1]; // DC
FFT[1] = h1r-FFT[1]; // FS/2 (NYQUIST)
}
return;
}
In Matlab, try using fftshift(fft(...)). Matlab doesn't automatically shift the spectrum after the FFT is computed, which is why the fftshift() function exists.
It is simply a Matlab formatting convention. Matlab arranges the Fourier transform in the following order:
DC, positive frequencies up to and including Nyquist, then the negative frequencies: -(Nyquist-1), ..., -2, -1
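Concretely, for N = 8 the fft bins correspond to the frequencies [0 1 2 3 4 -3 -2 -1] (the fifth bin is the shared Nyquist, where +4 and -4 coincide), and fftshift just rotates the vector so DC lands in the middle:
f = [0 1 2 3 4 -3 -2 -1];  % frequency of each fft bin for N = 8
fftshift(f)                % returns [4 -3 -2 -1 0 1 2 3]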
Let's say you have an 8-point sequence: [1 2 3 1 4 5 1 3]
In your signal processing class, your professor probably draws the Fourier spectrum on a Cartesian system (negative -> positive along the x axis), so your DC should be located at 0 on the x axis (the 4th position in your fft sequence, assuming the position index is 0-based).
In Matlab, the DC is the very first element of the fft sequence, so you need to call fftshift() to swap the first and second halves of the sequence so that DC ends up at the 4th position (0-based indexing).
I am attaching a graph (not reproduced here) so you may have a visual:
where a is the original 8-point sequence; FT(a) is the Fourier transform of a.
The matlab code is here:
a = [1 2 3 1 4 5 1 3];
A = fft(a);
N = length(a);
x = -N/2:N/2-1;
figure
subplot(3,1,1), stem(x, a,'o'); title('a'); xlabel('time')
subplot(3,1,2), stem(x, fftshift(abs(A),2),'o'); title('FT(a) in signal processing'); xlabel('frequency')
subplot(3,1,3), stem(x, abs(A),'o'); title('FT(a) in matlab'); xlabel('frequency')
Related
Using Matlab I am trying to construct a neural network that can classify handwritten digits that are 30x30 pixels. I use backpropagation to find the correct weights and biases. The network starts with 900 inputs, then has 2 hidden layers with 16 neurons and it ends with 10 outputs. Each output neuron has a value between 0 and 1 that represents the belief that the input should be classified as a certain digit. The problem is that after training, the output becomes almost indifferent to the input and it goes towards a uniform belief of 0.1 for each output.
My approach is to take each 30x30-pixel image and reshape it into a 900x1 vector (note that 'Images_vector' is already in vector format when it is loaded). The weights and biases are initialized with random values between 0 and 1. I am using stochastic gradient descent to update the weights and biases, with 10 randomly selected samples per batch. The equations are as described by Nielsen.
The script is as follows.
%% Inputs
numberofbatches = 1000;
batchsize = 10;
alpha = 1;
cutoff = 8000;
layers = [900 16 16 10];
%% Initialization
rng(0);
load('Images_vector')
Images_vector = reshape(Images_vector', 1, 10000);
labels = [ones(1,1000) 2*ones(1,1000) 3*ones(1,1000) 4*ones(1,1000) 5*ones(1,1000) 6*ones(1,1000) 7*ones(1,1000) 8*ones(1,1000) 9*ones(1,1000) 10*ones(1,1000)];
newOrder = randperm(10000);
Images_vector = Images_vector(newOrder);
labels = labels(newOrder);
images_training = Images_vector(1:cutoff);
images_testing = Images_vector(cutoff + 1:10000);
w = cell(1,length(layers) - 1);
b = cell(1,length(layers));
dCdw = cell(1,length(layers) - 1);
dCdb = cell(1,length(layers));
for i = 1:length(layers) - 1
w{i} = rand(layers(i+1),layers(i));
b{i+1} = rand(layers(i+1),1);
end
%% Learning process
batches = randi([1 cutoff - batchsize],1,numberofbatches);
cost = zeros(numberofbatches,1);
c = 1;
for batch = batches
for i = 1:length(layers) - 1
dCdw{i} = zeros(layers(i+1),layers(i));
dCdb{i+1} = zeros(layers(i+1),1);
end
for n = batch:batch+batchsize
y = zeros(10,1);
disp(labels(n))
y(labels(n)) = 1;
% Network
a{1} = images_training{n};
z{2} = w{1} * a{1} + b{2};
a{2} = sigmoid(0, z{2});
z{3} = w{2} * a{2} + b{3};
a{3} = sigmoid(0, z{3});
z{4} = w{3} * a{3} + b{4};
a{4} = sigmoid(0, z{4});
% Cost
cost(c) = sum((a{4} - y).^2) / 2;
% Gradient
d{4} = (a{4} - y) .* sigmoid(1, z{4});
d{3} = (w{3}' * d{4}) .* sigmoid(1, z{3});
d{2} = (w{2}' * d{3}) .* sigmoid(1, z{2});
dCdb{4} = dCdb{4} + d{4} / 10;
dCdb{3} = dCdb{3} + d{3} / 10;
dCdb{2} = dCdb{2} + d{2} / 10;
dCdw{3} = dCdw{3} + (a{3} * d{4}')' / 10;
dCdw{2} = dCdw{2} + (a{2} * d{3}')' / 10;
dCdw{1} = dCdw{1} + (a{1} * d{2}')' / 10;
c = c + 1;
end
% Adjustment
b{4} = b{4} - dCdb{4} * alpha;
b{3} = b{3} - dCdb{3} * alpha;
b{2} = b{2} - dCdb{2} * alpha;
w{3} = w{3} - dCdw{3} * alpha;
w{2} = w{2} - dCdw{2} * alpha;
w{1} = w{1} - dCdw{1} * alpha;
end
figure
plot(cost)
ylabel 'Cost'
xlabel 'Batches trained on'
With the sigmoid function being the following.
function y = sigmoid(derivative, x)
if derivative == 0
y = 1 ./ (1 + exp(-x));
else
y = sigmoid(0, x) .* (1 - sigmoid(0, x));
end
end
Other than this I have also tried to have 1 of each digit in each batch, but this gave the same result. Also I have tried varying the batch size, the number of batches and alpha, but with no success.
Does anyone know what I am doing wrong?
Correct me if I'm wrong: you have 10000 samples in your data, which you divide into 1000 batches of 10 samples. Your training process consists of running over these 10000 samples once.
This might be too little; normally a training process consists of several epochs (one epoch = iterating over every sample once). You can try going over your batches multiple times, as in the sketch below.
Also, for 900 inputs your network seems small. Try it with more neurons in the second layer. Hope it helps!
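A minimal sketch of the multiple-epoch idea, assuming your batch update above stays as it is (numberofepochs is a made-up parameter; tune it yourself):
numberofepochs = 30; % hypothetical value
for epoch = 1:numberofepochs
    % draw a fresh set of random batches each epoch
    batches = randi([1 cutoff - batchsize], 1, numberofbatches);
    for batch = batches
        % ... your existing gradient accumulation and weight update ...
    end
end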
I have a video and I have made a Sobel mask for it on MATLAB. Now I have to apply that Sobel mask on each frame of the video by reading each frame through for loop. The process is something like:
Step 1: Reading frame.
Step 2: Converting it to grayscale using rgb2gray.
Step 3: Converting it to double.
After applying the mask, when I try to write the frame to the resulting video.avi file, I get the following error:
"Frames of type double must be in the range of 0 to 1"
What is wrong with my code? The code I wrote is shown below:
vid = VideoReader('me.mp4');
frames = read(vid);
total = get(vid, 'NumberOfFrames');
write = VideoWriter('me.avi');
open(write);
mask1 = [-1 -2 -1; 0 0 0; 1 2 1]; % Horizontal mask
mask2 = [-1 0 1; -2 0 2; -1 0 1]; %Vertical Mask
for k = 1 : 125
image = frames(:,:,:,k);
obj = image;
obj1 = rgb2gray(obj);
obj2=double(obj1);
for row = 2 : size(obj2, 1) - 1
for col = 2 : size(obj2, 2) - 1
c1 = obj2(row - 1, col - 1) * mask1(1 ,1);
c2 = obj2(row - 1, col) * mask1(1 ,2);
c3 = obj2(row - 1, col + 1) * mask1(1 ,3);
c4 = obj2(row, col - 1)*mask1(2, 1);
c5 = obj2(row, col)*mask1(2, 2);
c6 = obj2(row, col + 1)*mask1(2, 3);
c7 = obj2(row + 1, col - 1)*mask1(3,1);
c8 = obj2(row + 1, col)*mask1(3,2);
c9 = obj2(row + 1, col + 1)*mask1(3,3);
c11 = obj2(row - 1, col - 1)*mask2(1 , 1);
c22 = obj2(row, col - 1)*mask2(2, 1);
c33 = obj2(row + 1, col - 1)*mask2(3, 1);
c44 = obj2(row -1, col)*mask2(1, 2);
c55 = obj2(row, col)*mask2(2 , 2);
c66 = obj2(row +1, col)*mask2(2 , 3);
c77 = obj2(row - 1, col + 1)*mask2(1 , 3);
c88 = obj2(row, col +1)*mask2(2 , 3);
c99 = obj2(row + 1, col + 1)*mask2(3 , 3);
result = c1 + c2 + c3 +c4 +c5+ c6+ c7+ c8 +c9;
result2 = c11 + c22 + c33 + c44 + c55 + c66 + c77 + c88 + c99;
%result = double(result);
%result2 = double(result2);
rim1(row, col) = ((result^2+result2^2) *1/2);
rim2(row, col) = atan(result/result2);
end
end
writeVideo(write, rim2); % This line has the problem: rim2 is the frame I'm trying to write to the video file.
end
close(write);
rim2 has range [-pi/2, pi/2] at the end, which is not compatible with writeVideo, which expects frames in the [0,1] range.
Convert it to [0,1] range using the mat2gray function, i.e.
writeVideo(write, mat2gray(rim2));
Your code will then work as expected (confirmed on my machine).
By the way, this doesn't affect your code, but presumably you meant to do im2double(A) rather than double(A). The former produces a "proper" grayscale image in the range [0,1], whereas the latter simply converts your uint8 image in the range [0,255] to double format (i.e. [0.0, 255.0]).
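A quick illustration of that difference:
A = uint8([0 128 255]);
double(A)     % returns [0 128 255]
im2double(A)  % returns [0 0.5020 1.0000]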
The line computing rim2 inside your double for loop uses atan, which generates values that are both positive and negative - from -pi/2 to +pi/2 exactly. rim2 is expected to have values only between [0,1]. I can't figure out exactly what you're doing, but it looks like you're calculating the magnitude and gradient angle at each pixel location. If you want to calculate the magnitude, you have to take the square root of the result, not simply multiply by 1/2. The gradient calculation (... or even the whole Sobel filter calculation ...) looks very suspect.
I'll just assume this is working for your purposes. I'm not sure how you want to present the output of rim2 for display, but perhaps you could scale it to the range [0,1] before you write the video so that it's within this range.
Something like this would work before you write the frame:
rim2 = (rim2 - min(rim2(:))) / (max(rim2(:)) - min(rim2(:)));
writeVideo(write, rim2);
The above is your typical min-max normalization seen in practice. Specifically, it ensures that the smallest value is 0 and the largest is 1, per frame. If you want to be consistent over all frames instead, simply add pi/2 and then divide by pi. This assumes the values span the full [-pi/2, +pi/2] range over all frames, however.
rim2 = (rim2 + pi/2) / pi;
writeVideo(write, rim2);
However, I suspect you want to write the magnitude to file, not the angle. Therefore, replace the video writing with rim1 as the frame to write instead of rim2, then normalize after. Make sure your gradient calculation is correct though:
rim1(row, col) = ((result^2+result2^2)^(1/2));
% or use sqrt:
% rim1(row, col) = sqrt(result^2 + result2^2);
Now write to file:
rim1 = (rim1 - min(rim1(:))) / (max(rim1(:)) - min(rim1(:)));
writeVideo(write, rim1);
However, if I can suggest a more efficient method: don't use for loops to compute the gradient and angle. Use conv2 (making sure you use the 'same' flag) or imfilter from the Image Processing Toolbox to perform the filtering, then compute the gradient magnitude and angle in a vectorized way. Also, convert to grayscale and cast your frame in one go inside the main loop. I'll assume you have the Image Processing Toolbox:
vid = VideoReader('me.mp4');
frames = read(vid);
total = get(vid, 'NumberOfFrames');
write = VideoWriter('me.avi');
open(write);
mask1 = [-1 -2 -1; 0 0 0; 1 2 1]; % Horizontal mask
mask2 = [-1 0 1; -2 0 2; -1 0 1]; %Vertical Mask
for k = 1 : 125
obj2 = double(rgb2gray(frames(:,:,:,k))); % New
grad1 = imfilter(obj2, mask1); % New
grad2 = imfilter(obj2, mask2); % New
rim1 = sqrt(grad1.^2 + grad2.^2); % New
rim2 = atan2(grad1, grad2); % New
% Normalize
rim2 = (rim2 - min(rim2(:))) / (max(rim2(:)) - min(rim2(:)));
writeVideo(write, rim2);
end
close(write);
I am trying to implement a simple pixel-level center-surround image enhancement. The center-surround technique uses statistics between the center pixel of a window and its surrounding neighborhood to decide what enhancement needs to be done. In the code given below I compare the center pixel with the average of its surroundings and, based on that, switch between two cases to enhance the contrast. The code that I have written is as follows:
im = normalize8(im,1); %to set the range of pixel from 0-255
s1 = floor(K1/2); %K1 is the size of the window for surround
M = 1000; %is a constant value
out1 = padarray(im,[s1,s1],'symmetric');
out1 = CE(out1,s1,M);
out = (out1(s1+1:end-s1,s1+1:end-s1));
out = normalize8(out,0); %to set the range of pixel from 0-1
function [out] = CE(out,s,M)
B = 255;
out1 = out;
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = (1/(2*s+1)^2)*sum(sum(temp));
if (Yij>=Sij)
Aij = A(Yij-Sij,M);
out1(i,j) = ((B + Aij)*Yij)/(Aij+Yij);
else
Aij = A(Sij-Yij,M);
out1(i,j) = (Aij*Yij)/(Aij+B-Yij);
end
end
end
out = out1;
function [Ax] = A(x,M)
if x == 0
Ax = M;
else
Ax = M/x;
end
The code does the following things:
1) Normalize the image to 0-255 range and pad it with additional elements to perform windowing operation.
2) Calls the function CE.
3) In the function CE obtain the windowed image(temp).
4) Find the average of the window (Sij).
5) Compare the center of the window (Yij) with the average value (Sij).
6) Based on the result of comparison perform one of the two enhancement operation.
7) Finally set the range back to 0-1.
I have to run this for multiple window size (K1,K2,K3, etc.) and the images are of size 1728*2034. When the window size is selected as 100, the time consumed is very high.
Can I use vectorization at some stage to reduce the time for loops?
The profiler results for window sizes 21 and 100 are in the attached screenshots (not reproduced here).
I have changed the code of my function and have written it without the sub-function. The code is as follows:
function [out] = CE(out,s,M)
B = 255;
Aij = zeros(1,2);
out1 = out;
n_factor = (1/(2*s+1)^2);
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = n_factor*sum(sum(temp));
if Yij-Sij == 0
Aij(1) = M;
Aij(2) = M;
else
Aij(1) = M/(Yij-Sij);
Aij(2) = M/(Sij-Yij);
end
if (Yij>=Sij)
out1(i,j) = ((B + Aij(1))*Yij)/(Aij(1)+Yij);
else
out1(i,j) = (Aij(2)*Yij)/(Aij(2)+B-Yij);
end
end
end
out = out1;
There is a slight improvement in speed, from 93 seconds to 88 seconds. Suggestions for any other improvements to my code are welcome.
I have tried to incorporate the suggestions given: replace the sliding window with a convolution and then vectorize the rest. The code below is my implementation, but I'm not getting the expected result.
function [out_im] = CE_conv(im,s,M)
B = 255;
temp = ones(2*s,2*s);
temp = temp ./ numel(temp);
out1 = conv2(im,temp,'same');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij, Mij = M/(Yij-Sij)
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij, Mij = M/(Sij-Yij)
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
I am not able to figure out where I'm going wrong.
A detailed explanation of what I'm trying to implement is given in the following paper:
Vonikakis, Vassilios, and Ioannis Andreadis. "Multi-scale image contrast enhancement." In Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, pp. 856-861. IEEE, 2008.
I've tried to see if I could get those times down by processing with colfilt and nlfilter, since both are usually much faster than for-loops for sliding-window image processing.
Both worked fine for relatively small windows. For an image of 2048x2048 pixels and a window of 10x10, the colfilt solution takes about 5 seconds (on my personal computer). With a 21x21 window the time jumped to 27 seconds, but that is still a relative improvement on the times displayed in the question. Unfortunately I don't have enough memory to run colfilt with 100x100 windows, but the solution with nlfilter works, though it takes about 120 seconds.
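One detail worth spelling out: with the 'sliding' option, colfilt rearranges every w-by-w neighbourhood into a column and passes your function a matrix with one window per column, so the function must return one value per column. A toy example, assuming the Image Processing Toolbox:
A = magic(5);
% 3x3 sliding-window mean; the handle receives one window per column
B = colfilt(A, [3 3], 'sliding', @(x) mean(x, 1));
That is why enhancematrix below computes S and Y once per column.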
Here the code
Solution with colfilt:
function outval = enhancematrix(inputmatrix,M,B)
%Inputmatrix is a 2D matrix or column vector, outval is a 1D row vector.
% If inputmatrix is made of integers...
inputmatrix = double(inputmatrix);
%1. Compute S and Y
normFactor = 1 / (size(inputmatrix,1) + 1).^2; %Size of column.
S = normFactor*sum(inputmatrix,1); % Sum over the columns.
Y = inputmatrix(ceil(size(inputmatrix,1)/2),:); % Center row.
% So far we have all S and Y, one value per column.
%2. Compute A(abs(Y-S))
A = Afunc(abs(S-Y),M);
% And all A: one value per column.
%3. The tricky part. If Y(i)-S(i) > 0 do something.
doPositive = (Y > S);
doNegative = ~doPositive;
outval = zeros(1,size(inputmatrix,2));
outval(doPositive) = (B + A(doPositive) .* Y(doPositive)) ./ (A(doPositive) + Y(doPositive));
outval(doNegative) = (A(doNegative) .* Y(doNegative)) ./ (A(doNegative) + B - Y(doNegative));
end
function out = Afunc(x,M)
% Input x is a row vector. Output is another row vector.
out = x;
out(x == 0) = M;
out(x ~= 0) = M./x(x ~= 0);
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhancematrix(x,M,B);
w = 21; % window size
result = colfilt(inputImage,[w w],'sliding',enhancenow);
Solution with nlfilter:
function outval = enhanceimagecontrast(neighbourhood,M,B)
%1. Compute S and Y
normFactor = 1 / (length(neighbourhood) + 1).^2;
S = normFactor*sum(neighbourhood(:));
Y = neighbourhood(ceil(size(neighbourhood,1)/2),ceil(size(neighbourhood,2)/2));
%2. Compute A(abs(Y-S))
test = (Y>=S);
A = Afunc(abs(Y-S),M);
%3. Return outval
if test
outval = ((B + A) * Y) / (A + Y);
else
outval = (A * Y) / (A + B - Y);
end
function aval = Afunc(x,M)
if (x == 0)
aval = M;
else
aval = M/x;
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhanceimagecontrast(x,M,B);
w = 21; % window size
result = nlfilter(inputImage,[w w], enhancenow);
I didn't spend much time checking that everything is 100% correct, but I did see some nice contrast enhancement (hair looks particularly nice).
This answer is the implementation suggested by Peter. I debugged it and am presenting the final working version of the fast implementation.
function [out_im] = CE_conv(im,s,M)
B = 255;
im = ( im - min(im(:)) ) ./ ( max(im(:)) - min(im(:)) )*255;
h = ones(s,s)./(s*s);
out1 = imfilter(im,h,'conv');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/(Yij-Sij);
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/(Sij-Yij);
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
out_im = ( out_im - min(out_im(:)) ) ./ ( max(out_im(:)) - min(out_im(:)) );
To call this use the following code
I = imread('pout.tif');
w_size = 51;
M = 4000;
output = CE_conv(I(:,:,1),w_size,M);
The output for the 'pout.tif' image is shown in the attached screenshot (not reproduced here).
The execution time for the bigger image with a 100x100 block size is around 5 seconds with this implementation.
I have a 3D volume, a 2D image, and an approximate mapping between the two (an affine transformation with no shearing; the scaling is known, and the rotation and translation are approximately known and need fitting). Because there is an error in this mapping, I would like to further register the 2D image to the 3D volume. I have not written registration code before, but since I can't find any program or code to solve this, I would like to try to do it myself. I believe the standard approach for registration is to optimize mutual information. I think that would also be suitable here, because the intensities are not equal between the two images. So I think I should write a function for the transformation, a function for the mutual information, and a function for the optimization.
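For the mutual information part, my understanding is that it can be computed from a joint histogram; a minimal sketch of what I have in mind (the function name, nbins, and the [0,1] scaling are my own choices, just for illustration):
function mi = mutualInfoSketch(A, B, nbins)
% A, B: same-size images with values scaled to [0,1]; nbins: histogram bins
ia = min(floor(A(:) * nbins) + 1, nbins);    % bin index per pixel of A
ib = min(floor(B(:) * nbins) + 1, nbins);    % bin index per pixel of B
pab = accumarray([ia ib], 1, [nbins nbins]); % joint histogram
pab = pab / sum(pab(:));                     % joint probability
pa = sum(pab, 2);                            % marginal of A (column vector)
pb = sum(pab, 1);                            % marginal of B (row vector)
papb = pa * pb;                              % outer product of marginals
nz = pab > 0;                                % skip empty bins to avoid log(0)
mi = sum(pab(nz) .* log(pab(nz) ./ papb(nz)));
end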
I did find some Matlab code on a MathWorks thread from two years ago, based on an article. The OP reports that she managed to get the code to work, but I don't understand exactly how she did that. Also, the Image Processing Toolbox for Matlab contains an implementation, but I don't have that toolbox and there does not seem to be an equivalent for Octave. SPM is a program that uses Matlab and has registration implemented, but it does not cope with 2D-to-3D registration. On the File Exchange there is a brute-force method that registers two 2D images using mutual information.
What she does is pass a multi-planar reconstruction function and a similarity/error function into a minimization algorithm. But I don't quite understand the details. Maybe it would be better to start fresh:
load mri; volume = squeeze(D);
phi = 3; theta = 2; psi = 5; %some small angles
tx = 1; ty = 1; tz = 1; % some small translation
dx = 0.25; dy = 0.25; dz = 2; %different scales
t = [tx; ty; tz];
r = [phi, theta, psi]; r = r*(pi/180);
dims = size(volume);
p0 = [round(dims(1)/2);round(dims(2)/2);round(dims(3)/2)]; %image center
S = eye(4); S(1,1) = dx; S(2,2) = dy; S(3,3) = dz;
Rx=[1 0 0 0;
0 cos(r(1)) sin(r(1)) 0;
0 -sin(r(1)) cos(r(1)) 0;
0 0 0 1];
Ry=[cos(r(2)) 0 -sin(r(2)) 0;
0 1 0 0;
sin(r(2)) 0 cos(r(2)) 0;
0 0 0 1];
Rz=[cos(r(3)) sin(r(3)) 0 0;
-sin(r(3)) cos(r(3)) 0 0;
0 0 1 0;
0 0 0 1];
R = S*Rz*Ry*Rx;
%make affine matrix to rotate about center of image
T1 = ( eye(3)-R(1:3,1:3) ) * p0(1:3);
T = T1 + t; %add translation
A = R;
A(1:3,4) = T;
Rold2new = A;
Rnew2old = inv(Rold2new);
%the transformation
[xx yy zz] = meshgrid(1:dims(1),1:dims(2),1:1);
coordinates_axes_new = [xx(:)';yy(:)';zz(:)'; ones(size(zz(:)))'];
coordinates_axes_old = Rnew2old*coordinates_axes_new;
Xcoordinates = reshape(coordinates_axes_old(1,:), dims(1), dims(2), dims(3));
Ycoordinates = reshape(coordinates_axes_old(2,:), dims(1), dims(2), dims(3));
Zcoordinates = reshape(coordinates_axes_old(3,:), dims(1), dims(2), dims(3));
%interpolation/reslicing
method = 'cubic';
slice= interp3(volume, Xcoordinates, Ycoordinates, Zcoordinates, method);
%so now I have my slice for which I would like to find the correct position
% first guess for A
A0 = eye(4); A0(1:3,4) = T1; A0(1,1) = dx; A0(2,2) = dy; A0(3,3) = dz;
% this is pretty close to A
% now how would I fit the slice to the volume by changing A0 and examining some similarity measure?
% probably maximize mutual information?
% http://www.mathworks.com/matlabcentral/fileexchange/14888-mutual-information-computation/content//mi/mutualinfo.m
OK, I was hoping for someone else's approach, which would probably have been better than mine, as I have never done any optimization or registration before. So I waited for Knedlsepp's bounty to almost finish. But I do have some code that's working now. It will find a local optimum, so the initial guess must be good. I wrote some functions myself, took some functions from the File Exchange as-is, and extensively edited some other functions from the File Exchange. Now that I have put all the code together to work as an example here, the rotations are off; I will try to correct that. I'm not sure where the difference between the example and my original code is; I must have made a typo when replacing some variables and the data loading scheme.
What I do is take the starting affine transformation matrix and decompose it into an orthogonal matrix and an upper triangular matrix. I then assume the orthogonal matrix is my rotation matrix, so I calculate the Euler angles from it. I take the translation directly from the affine matrix and, as stated in the problem, I assume I know the scaling matrix and that there is no shearing. That gives me all the degrees of freedom of the affine transformation, which my optimisation function changes and from which it constructs a new affine matrix; it applies that to the volume and calculates the mutual information. The Matlab optimisation function patternsearch then minimises 1 - MI/MI_max.
What I noticed when using it on my real data, which are multimodal brain images, is that it works much better on brain-extracted images, i.e. with the skull and the tissue outside of the skull removed.
%data
load mri; volume = double(squeeze(D));
%transformation parameters
phi = 3; theta = 1; psi = 5; %some small angles
tx = 1; ty = 1; tz = 3; % some small translation
dx = 0.25; dy = 0.25; dz = 2; %different scales
t = [tx; ty; tz];
r = [phi, theta, psi]; r = r*(pi/180);
%image center and size
dims = size(volume);
p0 = [round(dims(1)/2);round(dims(2)/2);round(dims(3)/2)];
%slice coordinate ranges
range_x = 1:dims(1)/dx;
range_y = 1:dims(2)/dy;
range_z = 1;
%rotation
R = dofaffine([0;0;0], r, [1,1,1]);
T1 = ( eye(3)-R(1:3,1:3) ) * p0(1:3); %rotate about p0
%scaling
S = eye(4); S(1,1) = dx; S(2,2) = dy; S(3,3) = dz;
%translation
T = [[eye(3), T1 + t]; [0 0 0 1]];
%affine
A = T*R*S;
% first guess for A
r00 = [1,1,1]*pi/180;
R00 = dofaffine([0;0;0], r00, [1 1 1]);
t00 = T1 + t + ( eye(3) - R00(1:3,1:3) ) * p0;
A0 = dofaffine( t00, r00, [dx, dy, dz] );
[ t0, r0, s0 ] = dofaffine( A0 );
x0 = [ t0.', r0, s0 ];
%the transformation
slice = affine3d(volume, A, range_x, range_y, range_z, 'cubic');
guess = affine3d(volume, A0, range_x, range_y, range_z, 'cubic');
%initialisation
Dt = [1; 1; 1];
Dr = [2 2 2].*pi/180;
Ds = [0 0 0];
Dx = [Dt', Dr, Ds];
%limits
LB = x0-Dx;
UB = x0+Dx;
%other inputs
ref_levels = length(unique(slice));
Qref = imquantize(slice, ref_levels);
MI_max = MI_GG(Qref, Qref);
%patternsearch options
options = psoptimset('InitialMeshSize',0.03,'MaxIter',20,'TolCon',1e-5,'TolMesh',5e-5,'TolX',1e-6,'PlotFcns',{@psplotbestf,@psplotbestx});
%optimise
[x2, MI_norm_neg, exitflag_len] = patternsearch(@(x) AffRegOptFunc(x, slice, volume, MI_max, x0), x0,[],[],[],[],LB(:),UB(:),options);
%check
p0 = [round(size(volume)/2).'];
R0 = dofaffine([0;0;0], x2(4:6)-x0(4:6), [1 1 1]);
t1 = ( eye(3) - R0(1:3,1:3) ) * p0;
A2 = dofaffine( x2(1:3).'+t1, x2(4:6), x2(7:9) ) ;
fitted = affine3d(volume, A2, range_x, range_y, range_z, 'cubic');
overlay1 = imfuse(slice, guess);
overlay2 = imfuse(slice, fitted);
figure(101);
ax(1) = subplot(1,2,1); imshow(overlay1, []); title('pre-reg')
ax(2) = subplot(1,2,2); imshow(overlay2, []); title('post-reg');
linkaxes(ax);
function normed_score = AffRegOptFunc( x, ref_im, reg_im, MI_max, x0 )
t = x(1:3).';
r = x(4:6);
s = x(7:9);
rangx = 1:size(ref_im,1);
rangy = 1:size(ref_im,2);
rangz = 1:size(ref_im,3);
ref_levels = length(unique(ref_im));
reg_levels = length(unique(reg_im));
t0 = x0(1:3).';
r0 = x0(4:6);
s0 = x0(7:9);
p0 = [round(size(reg_im)/2).'];
R = dofaffine([0;0;0], r-r0, [1 1 1]);
t1 = ( eye(3) - R(1:3,1:3) ) * p0;
t = t + t1;
Ap = dofaffine( t, r, s );
reg_im_t = affine3d(reg_im, Ap, rangx, rangy, rangz, 'cubic'); % Ap, not A: A is undefined inside this function
Qref = imquantize(ref_im, ref_levels);
Qreg = imquantize(reg_im_t, reg_levels);
MI = MI_GG(Qref, Qreg);
normed_score = 1-MI/MI_max;
end
function [ varargout ] = dofaffine( varargin )
% [ t, r, s ] = dofaffine( A )
% [ A ] = dofaffine( t, r, s )
if nargin == 1
%affine to degrees of freedom (no shear)
A = varargin{1};
[T, R, S] = decompaffine(A);
r = GetEulerAngles(R(1:3,1:3));
s = [S(1,1), S(2,2), S(3,3)];
t = T(1:3,4);
varargout{1} = t;
varargout{2} = r;
varargout{3} = s;
elseif nargin == 3
%degrees of freedom to affine (no shear)
t = varargin{1};
r = varargin{2};
s = varargin{3};
R = GetEulerAngles(r); R(4,4) = 1;
S(1,1) = s(1); S(2,2) = s(2); S(3,3) = s(3); S(4,4) = 1;
T = eye(4); T(1,4) = t(1); T(2,4) = t(2); T(3,4) = t(3);
A = T*R*S;
varargout{1} = A;
else
error('incorrect number of input arguments');
end
end
function [ T, R, S ] = decompaffine( A )
%I assume A = T * R * S
T = eye(4);
R = eye(4);
S = eye(4);
%decompose in orthogonal matrix q and upper triangular matrix r
%I assume q is a rotation matrix and r is a scale and shear matrix
%matlab 2014 can force real solution
[q r] = qr(A(1:3,1:3));
R(1:3,1:3) = q;
S(1:3,1:3) = r;
% A*S^-1*R^-1 = T*R*S*S^-1*R^-1 = T*R*I*R^-1 = T*R*R^-1 = T*I = T
T = A*inv(S)*inv(R);
t = T(1:3,4);
T = eye(4); T(1:3,4) = t; % keep only the translational part
end
function [varargout]= GetEulerAngles(R)
assert(length(R)==3)
dims = size(R);
if min(dims)==1
rx = R(1); ry = R(2); rz = R(3);
R = [[ cos(ry)*cos(rz), -cos(ry)*sin(rz), sin(ry)];...
[ cos(rx)*sin(rz) + cos(rz)*sin(rx)*sin(ry), cos(rx)*cos(rz) - sin(rx)*sin(ry)*sin(rz), -cos(ry)*sin(rx)];...
[ sin(rx)*sin(rz) - cos(rx)*cos(rz)*sin(ry), cos(rz)*sin(rx) + cos(rx)*sin(ry)*sin(rz), cos(rx)*cos(ry)]];
varargout{1} = R;
else
ry=asin(R(1,3));
rz=acos(R(1,1)/cos(ry));
rx=acos(R(3,3)/cos(ry));
if nargout > 1 && nargout < 4
varargout{1} = rx;
varargout{2} = ry;
varargout{3} = rz;
elseif nargout == 1
varargout{1} = [rx ry rz];
else
error('wrong number of output arguments');
end
end
end
I asked a question a few days ago, but I guess it was a little too complicated and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is J(w) = -1/m * sum_i( y_i*ln(h_w(x_i)) + (1-y_i)*ln(1 - h_w(x_i)) ), where i indexes the examples in the training matrix X. I tried to apply it in MATLAB but I find it really difficult. There are a couple of things I don't know:
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training inputs)?
is the gradient calculated correctly?
is the numerical gradient (gradApprox) calculated correctly?
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem is the calculation of the gradients.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
persistent index;
% 1 x x
% 1 x x
% 1 x x
% x = 1 x x
% 1 x x
% 1 x x
% 1 x x
m = size(x,1);
if isempty(index) || index > size(x,1)
index = 1;
end
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Dew = cell(2,1);
DewApprox = cell(2,1);
% Forward propagation
a0 = x(index,:)';
z1 = theta{1}*[1;a0];
a1 = L(z1);
z2 = theta{2}*[1;a1];
a2 = L(z2);
% Back propagation
d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
Dew{2} = [1;a1]*d2;
d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
Dew{1} = [1;a0]*d1(2:end)';
% NNRes = NN(x,theta)';
% jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
jVal = -1/m*(a2 - y(index))*a2*(1-a2);
gradVal = [Dew{1}(:);Dew{2}(:)];
gradApprox = CalcGradApprox(0.0001);
index = index + 1;
function output = CalcGradApprox(epsilon)
output = zeros(size(gradVal));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a2min = NN(x(index,:),thetaMin)';
a2max = NN(x(index,:),thetaMax)';
jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
EDIT:
Below I present the correct version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
m = size(x,1);
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Delta = cell(2,1);
Delta{1} = zeros(size(theta{1}));
Delta{2} = zeros(size(theta{2}));
D = cell(2,1);
D{1} = zeros(size(theta{1}));
D{2} = zeros(size(theta{2}));
jVal = 0;
for in = 1:size(x,1)
% Forward propagation
a1 = [1;x(in,:)']; % added bias to a0
z2 = theta{1}*a1;
a2 = [1;L(z2)]; % added bias to a1
z3 = theta{2}*a2;
a3 = L(z3);
% Back propagation
d3 = a3 - y(in);
d2 = theta{2}'*d3.*a2.*(1 - a2);
Delta{2} = Delta{2} + d3*a2';
Delta{1} = Delta{1} + d2(2:end)*a1';
jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
end
D{1} = 1/m*Delta{1};
D{2} = 1/m*Delta{2};
jVal = -1/m*jVal;
gradVal = [D{1}(:);D{2}(:)];
gradApprox = CalcGradApprox(x(in,:),0.0001);
% Nested function to calculate gradApprox
function output = CalcGradApprox(x,epsilon)
output = zeros(size(thetaVec));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a3min = NN(x,thetaMin)';
a3max = NN(x,thetaMax)';
jValMin = 0;
jValMax = 0;
for inn=1:size(x,1)
jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
end
jValMin = 1/m*jValMin;
jValMax = 1/m*jValMax;
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum each outputs given all training data (i = 1, ... N, where
N is number of inputs for training)
If you are talking about the cost function, it is normal to sum and then normalise by the number of training examples, so that costs are comparable across training sets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which would change the answer. Note that the sum function only sums along a single dimension at a time - meaning that if you have an (M by N) array, sum will produce a (1 by N) array.
The cost function should have a scalar output.
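For instance, assuming h is an m-by-1 vector holding the network output hw(xi) for every training example and y the matching labels, the whole cross-entropy cost collapses to one scalar expression:
% h: m-by-1 predictions in (0,1), y: m-by-1 labels in {0,1}
jVal = -(1/m) * sum( y .* log(h) + (1 - y) .* log(1 - h) );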
Q2
is the gradient calculated correctly
The gradient is not calculated correctly - specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradAapprox) calculated correctly.
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
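For reference, the usual recipe is a central difference of the full scalar cost with respect to each parameter; a minimal sketch (costFun here is any handle that returns your scalar jVal):
function g = numGrad(costFun, theta, epsilon)
% Central-difference approximation of the gradient of costFun at theta.
g = zeros(size(theta));
for n = 1:numel(theta)
    tPlus = theta;   tPlus(n)  = theta(n) + epsilon;
    tMinus = theta;  tMinus(n) = theta(n) - epsilon;
    g(n) = (costFun(tPlus) - costFun(tMinus)) / (2 * epsilon);
end
end
Compare g against your analytic gradVal element by element; for epsilon around 1e-4 they should agree to several decimal places.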