Infinity value in evaluation step - MATLAB

I estimated a depth map using machine learning and I want to evaluate my results (using MATLAB). The estimated depth map and the true depth map are 8-bit images (normalized to [0 1] before evaluation). I used relative error, log10 error, and RMSE for the evaluation step.
function result = evaluate(estimated,depthTrue,number)
if(number == 1)
result = relative(estimated,depthTrue);
end
if (number == 2)
result = log10error(estimated,depthTrue);
end
if(number ==3)
result = rmse(estimated,depthTrue);
end
end
function result = relative(estimated,depthTrue)
result = mean(mean(abs(estimated - depthTrue)./depthTrue));
end
function result = log10error(estimated,depthTrue)
result = mean(mean(abs(log10(estimated) - log10(depthTrue))));
end
function result = rmse(estimated,depthTrue)
result = sqrt(mean(mean(abs(estimated - depthTrue).^2)));
end
When I try an evaluation with an image I get infinity values (only for log10error and relative). After searching, I found that depthTrue and estimated can contain 0 values.
log10(0)
ans =
-Inf
5/0
ans =
Inf
So, what should I do?

I can think of several approaches to overcome this, depending on what best suits your needs. You can ignore the Infs or just replace them with other values. For example:
depthTrue = rand(4);
estimated = rand(4);
estimated(1,1) = 0;
% 1) ignore infs
absdiff = abs(log10(estimated(:)) - log10(depthTrue(:)));
result1 = mean( absdiff(~isinf(absdiff)) )
% 2) substitute infs
veryHighNumber = 1e5;
absdiff(isinf(absdiff)) = veryHighNumber;
result2 = mean( absdiff )
% 3) substitute zeros
verySmallNumber = 1e-5;
depthTrue(depthTrue == 0) = verySmallNumber;
estimated(estimated == 0) = verySmallNumber;
absdiff = abs(log10(estimated(:)) - log10(depthTrue(:)));
result3 = mean( absdiff )
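If you would rather build the guard into the metric functions themselves, here is a small sketch of my variant (assuming inputs normalized to [0 1]): clamp zeros before taking logs, and average the relative error only over pixels with nonzero ground truth.
function result = log10errorSafe(estimated,depthTrue)
% clamp to eps so log10 never sees 0; pick a floor that suits your data
estimated = max(estimated,eps);
depthTrue = max(depthTrue,eps);
result = mean(mean(abs(log10(estimated) - log10(depthTrue))));
end
function result = relativeSafe(estimated,depthTrue)
% mask out zero ground-truth pixels to avoid division by zero
valid = depthTrue > 0;
result = mean(abs(estimated(valid) - depthTrue(valid))./depthTrue(valid));
end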

Calculating Error Rates for QPSK (for AWGN channel) in MATLAB

I want to simulate QPSK over an AWGN channel and compare the errors I get with the theoretical ones on a plot. I want to do this in MATLAB for different SNR values from 1 to 10. When I plot this I get a huge difference between the simulated and theoretical errors. I suspect I might have gone wrong in the demodulation part: I used the atan function there, but I am not 100% sure that it works. Can you help me?
M=100000;
snrdB=1:10;
snr=10.^(snrdB/10);
sError = zeros(1,10);%simulated error
tError = zeros(1,10);%theoretical error
for i=1:10
symbols = randi([1,4],1,M);
symbols(symbols == 1) = 1;
symbols(symbols == 2) = 1i;
symbols(symbols == 3) = -1;
symbols(symbols == 4) = -1i;
%calculating total energy
Eb = 0;
for k=1:M
Eb = Eb + abs(symbols(k).^2);
end
Eb = Eb/2;
var = abs(sqrt(Eb/(2*snr(i))));%variance
noise = var*rand(1,M) + var*1i*rand(1,M);%noise
r=symbols+noise;%adding noise
symbols1 = atan(r);%demodulation
error = abs((symbols - symbols1)./abs(symbols));%error
sError(i) = mean(error);
tError(i) = 2*qfunc(sqrt(2*snr(i)));%theoretical error
end
%comparison
semilogy(snrdB, tError,'x-')
hold on
semilogy(snrdB, sError,'o-')
xlabel('snr(dB)')
ylabel('error')
grid on
Something like this should work:
M=100000;
snrdB=1:10;
snr=10.^(snrdB/10);
sError = zeros(1,10);%simulated error
tError = zeros(1,10);%theoretical error
for i=1:10
symbols = randi([1,4],1,M);
symbols(symbols == 1) = 1;
symbols(symbols == 2) = 1i;
symbols(symbols == 3) = -1;
symbols(symbols == 4) = -1i;
var = 1/(2*sqrt(snr(i)));%noise standard deviation per axis
noise = var*(randn(1,M)) + var*j*(randn(1,M));
r=symbols+noise;%adding noise
c1 = exp(j*pi/4);
c2 = exp(-j*pi/4);
symbols1 = sign(real(symbols .* c1));
symbols2 = sign(imag(symbols .* c1));
symbols3 = sign(real(symbols .* c2));
symbols4 = sign(imag(symbols .* c2));
symbols1r = sign(real(r .* c1));
symbols2r = sign(imag(r .* c1));
symbols3r = sign(real(r .* c2));
symbols4r = sign(imag(r .* c2));
ind = find(symbols1==symbols1r & symbols2==symbols2r & symbols3==symbols3r & symbols4==symbols4r);
sError(i) = (M-length(ind))/M;
tError(i) = 2*qfunc(sqrt(2*snr(i)));%theoretical error
end
%comparison
semilogy(snrdB, tError,'x-')
hold on
semilogy(snrdB, sError,'o-')
xlabel('snr(dB)')
ylabel('error')
grid on
I also think something is wrong with the demodulation. You shouldn't use atan here. I suggest you work on the real and imaginary parts separately. Delete the line with atan and replace it with these lines:
resymbols = sign(real(r));
imsymbols = sign(imag(r));
symbols1 = resymbols + 1i*imsymbols; %hard decision per axis (this assumes the conventional constellation mentioned below)
Please keep in mind that it is rather uncommon to use the following constellation for QPSK:
{1, -1, 1j, -1j}
Usually you use:
{1+1j, 1-1j, -1+1j, -1-1j}
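For reference, here is a minimal sketch of such a simulation with the conventional Gray-mapped constellation (my version; snr is interpreted as Eb/N0, symbols have Es = 1, so N0 = 1/(2*snr)). With this mapping, per-axis sign decisions demodulate directly:
M = 1e5;
snrdB = 1:10;
snr = 10.^(snrdB/10); % Eb/N0 (linear)
ser = zeros(size(snr));
for k = 1:numel(snr)
bits = randi([0 1],2,M); % two bits per symbol
s = ((2*bits(1,:)-1) + 1i*(2*bits(2,:)-1))/sqrt(2); % Gray mapping, Es = 1
N0 = 1/(2*snr(k)); % Es/N0 = 2*Eb/N0
noise = sqrt(N0/2)*(randn(1,M) + 1i*randn(1,M)); % complex AWGN
r = s + noise;
bhat = [real(r) > 0; imag(r) > 0]; % independent per-axis decisions
ser(k) = mean(any(bhat ~= bits,1)); % symbol error rate
end
semilogy(snrdB,ser,'o-',snrdB,2*qfunc(sqrt(2*snr)),'x-')
legend('simulated','theoretical (approx.)'); grid on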

Using custom functions in blockproc in Matlab

I wanted to use my own (custom) function with the blockproc function, but I am not able to figure out how to actually incorporate it. I am new to MATLAB.
In this function I am trying to embed info (the bits of LR_seq) into the block. I am selecting a pair of pixels at a time in the block and trying to embed my info into it.
This is the code of my function:
function dataencode(block_struct,LR_seq)
LR_seq = vec2mat(LR_seq,2,4);%reshaping it into a 4x2 matrix to make it fit into the for loop
LR_mat = zeros(4,4);%adding zero columns in between to convert it into 4,4 mat to fit into the for loop
LR_mat(:,1:2:end) = LR_seq;
for r = 1 : 4
for c = [1 3]
x = uint16(block_struct.data(r,c)) ;
y = uint16(block_struct.data(r,c+1));
x1 = de2bi(x,12); y1 = de2bi(y,12);
if(~(2*x - y == 1) | ~(2*y - x == 1) | ~(2*x - y == 255) | ~(2*y - x == 255))
if((x1(1) == 0) & (y1(1) == 0)) %because in MATLAB the LSB comes first
xt = (2*x - y) ; yt = (2*y - x);
xt_bin = de2bi(xt,12); xt_bin(1) = 1; xtd = bi2de(xt_bin);block_struct.data(r,c) =xtd;
yt_bin = de2bi(yt,12) ; yt_bin(1) = LR_mat(r,c); ytd = bi2de(yt_bin); block_struct.data(r,c+1) = ytd;
elseif((x1(1) == 1) | (y1(1) == 1))
x1(1) = 0 ; xtd = bi2de(x1) ; block_struct.data(r,c) = xtd;
y1(1) = LR_mat(r,c); ytd = bi2de(y1) ; block_struct.data(r,c+1) = ytd;
end
else
x1(1) = 0; xtd = bi2de(x1) ; block_struct.data(r,c) = xtd;
ytd = bi2de(y1); block_struct.data(r,c+1) = ytd;
end
end
end
end
M is an RGB 256x256 image in which I want to embed the sequence of bits LR_seq.
I am trying to call this function from the command window, but it gives me an empty matrix. Please help me out. This is my code for calling the function:
for ii = 1 : 8 : numel(LR_seq)
myfunc = @(block_struct) dataencode(block_struct,LR_seq(ii : ii+7));
U_im_new = blockproc(M,[4 4],myfunc)
end
dataencode takes a block_struct parameter, but it never returns a value; blockproc expects the supplied function to return the processed block data, so as written there is nothing to assemble into an output image.
You may prefer to do this:
myfunc = @(block_struct) dataencode(block_struct,LR_seq(1 : 8));
U_im_r = blockproc(M(:,:,1),[4 4],myfunc);
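As a minimal illustration of the required shape of the callback (a hypothetical sketch, not your embedding logic), this function returns the block with one bit embedded in the LSB of its first pixel:
function out = dataencode_sketch(block_struct,bits8)
% blockproc assembles the returned matrices into the output image
out = block_struct.data;
out(1,1) = bitset(out(1,1),1,bits8(1)); % set the LSB of one pixel
end
% usage:
% U = blockproc(M(:,:,1),[4 4],@(bs) dataencode_sketch(bs,LR_seq(1:8)));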

High performance computation of mean 2D-Euclidian-distance

Let's say I have a position matrix P with dimensions 10x2, where the first column contains x values and the second column the corresponding y values. I want the mean of the lengths of the position vectors. The way I have done this so far is with the following code:
avg = sum( sqrt( P(:,1).^2 + P(:,2).^2) )/10;
I've been told that the integral function integral2 is much faster and more precise for this task. How can I use integral2 to compute the mean value?
Just so this question doesn't remain unanswered: integral2 numerically integrates a function handle over a 2-D region, so it does not apply to a discrete set of points and cannot replace the sum here. As for performance, compare the following equivalent formulations:
function q42372466(DO_SUM)
if ~nargin % nargin == 0
DO_SUM = true;
end
% Generate some data:
P = rand(2E7,2);
% Correctness:
R{1} = m1(P);
R{2} = m2(P);
R{3} = m3(P);
R{4} = m4(P);
R{5} = m5(P);
R{6} = m6(P);
for ind1 = 2:numel(R)
assert(abs(R{1}-R{ind1}) < 1E-10);
end
% Benchmark:
t(1) = timeit(@()m1(P));
t(2) = timeit(@()m2(P));
t(3) = timeit(@()m3(P));
t(4) = timeit(@()m4(P));
t(5) = timeit(@()m5(P));
t(6) = timeit(@()m6(P));
% Print timings:
disp(t);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Original method:
function out = m1(P)
if DO_SUM
out = sum( sqrt( P(:,1).^2 + P(:,2).^2))/max(size(P));
else
out = mean( sqrt( P(:,1).^2 + P(:,2).^2));
end
end
% pdist2 method:
function out = m2(P)
if DO_SUM
out = sum(pdist2([0,0],P))/max(size(P));
else
out = mean(pdist2([0,0],P));
end
end
% Shortened method #1:
function out = m3(P)
if DO_SUM
out = sum(sqrt(sum(P.*P,2)))/max(size(P));
else
out = mean(sqrt(sum(P.*P,2)));
end
end
% Shortened method #2:
function out = m4(P)
if DO_SUM
out = sum(sqrt(sum(P.^2,2)))/max(size(P));
else
out = mean(sqrt(sum(P.^2,2)));
end
end
% hypot
function out = m5(P)
if DO_SUM
out = sum(hypot(P(:,1),P(:,2)))/max(size(P));
else
out = mean(hypot(P(:,1),P(:,2)));
end
end
% (a+b)^2 formula , Divakar's idea
function out = m6(P)
% Since a^2 + b^2 = (a+b)^2 - 2ab,
if DO_SUM
out = sum(sqrt(sum(P,2).^2 - 2*prod(P,2)))/max(size(P));
else
out = mean(sqrt(sum(P,2).^2 - 2*prod(P,2)));
end
end
end
Typical result on my R2016b + Win10 x64:
>> q42372466(0) % with mean()
0.1165 0.1971 0.2167 0.2161 0.1719 0.2375
>> q42372466(1) % with sum()
0.1156 0.1979 0.2233 0.2181 0.1610 0.2357
Which means that your method is actually the best of the above, by a considerable margin! (Honestly - didn't expect that!)

How to reduce the time consumed by the for loop?

I am trying to implement a simple pixel-level center-surround image enhancement. The center-surround technique makes use of statistics between the center pixel of the window and the surrounding neighborhood to decide what enhancement needs to be done. In the code given below I compare the center pixel with the average of its surroundings and, based on that, switch between two cases to enhance the contrast. The code that I have written is as follows:
im = normalize8(im,1); %to set the range of pixel from 0-255
s1 = floor(K1/2); %K1 is the size of the window for surround
M = 1000; %is a constant value
out1 = padarray(im,[s1,s1],'symmetric');
out1 = CE(out1,s1,M);
out = (out1(s1+1:end-s1,s1+1:end-s1));
out = normalize8(out,0); %to set the range of pixel from 0-1
function [out] = CE(out,s,M)
B = 255;
out1 = out;
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = (1/(2*s+1)^2)*sum(sum(temp));
if (Yij>=Sij)
Aij = A(Yij-Sij,M);
out1(i,j) = ((B + Aij)*Yij)/(Aij+Yij);
else
Aij = A(Sij-Yij,M);
out1(i,j) = (Aij*Yij)/(Aij+B-Yij);
end
end
end
out = out1;
function [Ax] = A(x,M)
if x == 0
Ax = M;
else
Ax = M/x;
end
The code does the following things:
1) Normalize the image to the 0-255 range and pad it with additional elements to perform the windowing operation.
2) Call the function CE.
3) In the function CE, obtain the windowed image (temp).
4) Find the average of the window (Sij).
5) Compare the center of the window (Yij) with the average value (Sij).
6) Based on the result of the comparison, perform one of the two enhancement operations.
7) Finally, set the range back to 0-1.
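In equation form, steps 4-6 compute (with B = 255, and A(x) = M/x for x ~= 0, A(0) = M):
out(i,j) = ((B + A(Yij-Sij))*Yij) / (A(Yij-Sij) + Yij) when Yij >= Sij
out(i,j) = (A(Sij-Yij)*Yij) / (A(Sij-Yij) + B - Yij) otherwise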
I have to run this for multiple window sizes (K1, K2, K3, etc.), and the images are of size 1728x2034. When the window size is 100, the time consumed is very high.
Can I use vectorization at some stage to reduce the time spent in the loops?
(Profiler screenshots for window sizes 21 and 100 omitted.)
I have changed the code of my function and written it without the sub-function. The code is as follows:
function [out] = CE(out,s,M)
B = 255;
Aij = zeros(1,2);
out1 = out;
n_factor = (1/(2*s+1)^2);
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = n_factor*sum(sum(temp));
if Yij-Sij == 0
Aij(1) = M;
Aij(2) = M;
else
Aij(1) = M/(Yij-Sij);
Aij(2) = M/(Sij-Yij);
end
if (Yij>=Sij)
out1(i,j) = ((B + Aij(1))*Yij)/(Aij(1)+Yij);
else
out1(i,j) = (Aij(2)*Yij)/(Aij(2)+B-Yij);
end
end
end
out = out1;
There is a slight improvement in speed, from 93 s to 88 s. Suggestions for any other improvements to my code are welcome.
I have tried to incorporate the suggestion to replace the sliding window with convolution and then vectorize the rest. The code below is my implementation, and I'm not getting the expected result.
function [out_im] = CE_conv(im,s,M)
B = 255;
temp = ones(2*s,2*s);
temp = temp ./ numel(temp);
out1 = conv2(im,temp,'same');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/Yij-Sij;
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/Sij-Yij;
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
I am not able to figure out where I'm going wrong.
A detailed explanation of what I'm trying to implement is given in the following paper:
Vonikakis, Vassilios, and Ioannis Andreadis. "Multi-scale image contrast enhancement." In Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, pp. 856-861. IEEE, 2008.
I've tried to see if I could get those times down by processing with colfilt and nlfilter, since both are usually much faster than for-loops for sliding-window image processing.
Both worked fine for relatively small windows. For an image of 2048x2048 pixels and a window of 10x10, the solution with colfilt takes about 5 seconds (on my personal computer). With a window of 21x21 the time jumped to 27 seconds, but that is still a relative improvement on the times displayed in the question. Unfortunately I don't have enough memory to use colfilt with windows of 100x100, but the solution with nlfilter works, though it takes about 120 seconds.
Here is the code.
Solution with colfilt:
function outval = enhancematrix(inputmatrix,M,B)
%Inputmatrix is a 2D matrix or column vector, outval is a 1D row vector.
% If inputmatrix is made of integers...
inputmatrix = double(inputmatrix);
%1. Compute S and Y
normFactor = 1 / size(inputmatrix,1); % 1/(window area): each column holds one w*w neighbourhood, matching 1/(2*s+1)^2 in the original
S = normFactor*sum(inputmatrix,1); % Sum over the columns.
Y = inputmatrix(ceil(size(inputmatrix,1)/2),:); % Center row.
% So far we have all S and Y, one value per column.
%2. Compute A(abs(Y-S))
A = Afunc(abs(S-Y),M);
% And all A: one value per column.
%3. The tricky part. If Y(i)-S(i) > 0 do something.
doPositive = (Y >= S); % >= to match the original Yij >= Sij test
doNegative = ~doPositive;
outval = zeros(1,size(inputmatrix,2));
outval(doPositive) = ((B + A(doPositive)) .* Y(doPositive)) ./ (A(doPositive) + Y(doPositive));
outval(doNegative) = (A(doNegative) .* Y(doNegative)) ./ (A(doNegative) + B - Y(doNegative));
end
function out = Afunc(x,M)
% Input x is a row vector. Output is another row vector.
out = x;
out(x == 0) = M;
out(x ~= 0) = M./x(x ~= 0);
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhancematrix(x,M,B);
w = 21 % windowsize
result = colfilt(inputImage,[w w],'sliding',enhancenow);
Solution with nlfilter:
function outval = enhanceimagecontrast(neighbourhood,M,B)
%1. Compute S and Y
normFactor = 1 / numel(neighbourhood); % 1/(window area)
S = normFactor*sum(neighbourhood(:));
Y = neighbourhood(ceil(size(neighbourhood,1)/2),ceil(size(neighbourhood,2)/2));
%2. Compute A(abs(Y-S))
test = (Y>=S);
A = Afunc(abs(Y-S),M);
%3. Return outval
if test
outval = ((B + A) * Y) / (A + Y);
else
outval = (A * Y) / (A + B - Y);
end
function aval = Afunc(x,M)
if (x == 0)
aval = M;
else
aval = M/x;
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhanceimagecontrast(x,M,B);
w = 21 % windowsize
result = nlfilter(inputImage,[w w], enhancenow);
I didn't spend much time checking that everything is 100% correct, but I did see some nice contrast enhancement (hair looks particularly nice).
This answer is the implementation that was suggested by Peter. I debugged it, and this is the final working version of the fast implementation (relative to my earlier attempt, the averaging-kernel size and the explicit 0-255 normalization have changed).
function [out_im] = CE_conv(im,s,M)
B = 255;
im = ( im - min(im(:)) ) ./ ( max(im(:)) - min(im(:)) )*255;
h = ones(s,s)./(s*s);
out1 = imfilter(im,h,'conv');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/(Yij-Sij);
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/(Sij-Yij);
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
out_im = ( out_im - min(out_im(:)) ) ./ ( max(out_im(:)) - min(out_im(:)) );
To call this use the following code
I = imread('pout.tif');
w_size = 51;
M = 4000;
output = CE_conv(I(:,:,1),w_size,M);
(The output image for 'pout.tif' is omitted here.)
The execution time for the bigger image with a 100x100 block size is around 5 seconds with this implementation.

How to use a neural network for non-binary input and output

I tried to use a modified version of the NN backpropagation code by Phil Brierley
(www.philbrierley.com). When I try to solve the XOR problem it works perfectly, but when I try to solve a problem of the form output = x1^2 + x2^2 (output = sum of squares of the inputs), the results are not accurate. I have scaled the input and output between -1 and 1. I get different results every time I run the same program (I understand this is due to the random weight initialization), but the results are very different. I tried changing the learning rate, but the results still converge to inaccurate values.
I have given the code below.
%---------------------------------------------------------
% MATLAB neural network backprop code
% by Phil Brierley
%--------------------------------------------------------
clear; clc; close all;
%user specified values
hidden_neurons = 4;
epochs = 20000;
input = [];
for i =-10:2.5:10
for j = -10:2.5:10
input = [input;i j];
end
end
output = (input(:,1).^2 + input(:,2).^2);
output1 = output;
% Maximum input and output limit and scaling factors
m1 = -10; m2 = 10;
m3 = 0; m4 = 250;
c = -1; d = 1;
%Scale input and output
for i =1:size(input,2)
I = input(:,i);
scaledI = ((d-c)*(I-m1) ./ (m2-m1)) + c;
input(:,i) = scaledI;
end
for i =1:size(output,2)
I = output(:,i);
scaledI = ((d-c)*(I-m3) ./ (m4-m3)) + c;
output(:,i) = scaledI;
end
train_inp = input;
train_out = output;
%read how many patterns and add bias
patterns = size(train_inp,1);
train_inp = [train_inp ones(patterns,1)];
%read how many inputs and initialize learning rate
inputs = size(train_inp,2);
hlr = 0.1;
%set initial random weights
weight_input_hidden = (randn(inputs,hidden_neurons) - 0.5)/10;
weight_hidden_output = (randn(1,hidden_neurons) - 0.5)/10;
%Training
err = zeros(1,epochs);
for iter = 1:epochs
alr = hlr;
blr = alr / 10;
%loop through the patterns, selecting randomly
for j = 1:patterns
%select a random pattern
patnum = round((rand * patterns) + 0.5);
if patnum > patterns
patnum = patterns;
elseif patnum < 1
patnum = 1;
end
%set the current pattern
this_pat = train_inp(patnum,:);
act = train_out(patnum,1);
%calculate the current error for this pattern
hval = (tanh(this_pat*weight_input_hidden))';
pred = hval'*weight_hidden_output';
error = pred - act;
% adjust weight hidden - output
delta_HO = error.*blr .*hval;
weight_hidden_output = weight_hidden_output - delta_HO';
% adjust the weights input - hidden
delta_IH= alr.*error.*weight_hidden_output'.*(1-(hval.^2))*this_pat;
weight_input_hidden = weight_input_hidden - delta_IH';
end
% -- another epoch finished
%compute overall network error at end of each epoch
pred = weight_hidden_output*tanh(train_inp*weight_input_hidden)';
error = pred' - train_out;
err(iter) = ((sum(error.^2))^0.5);
%stop if error is small
if err(iter) < 0.001
fprintf('converged at epoch: %d\n',iter);
break
end
end
%Output after training
pred = weight_hidden_output*tanh(train_inp*weight_input_hidden)';
Y = m3 + (m4-m3)*(pred-c)./(d-c);
% Testing for a new set of input
input_test = [6 -3.1; 0.5 1; -2 3; 3 -2; -4 5; 0.5 4; 6 1.5];
output_test = (input_test(:,1).^2 + input_test(:,2).^2);
input1 = input_test;
%Scale input
for i =1:size(input1,2)
I = input1(:,i);
scaledI = ((d-c)*(I-m1) ./ (m2-m1)) + c;
input1(:,i) = scaledI;
end
%Predict output
train_inp1 = input1;
patterns = size(train_inp1,1);
bias = ones(patterns,1);
train_inp1 = [train_inp1 bias];
pred1 = weight_hidden_output*tanh(train_inp1*weight_input_hidden)';
%Rescale
Y1 = m3 + (m4-m3)*(pred1-c)./(d-c);
analy_numer = [output_test Y1']
plot(err)
This is the sample output I get for the problem (state after 20000 epochs):
analy_numer =
45.6100 46.3174
1.2500 -2.9457
13.0000 11.9958
13.0000 9.7097
41.0000 44.9447
16.2500 17.1100
38.2500 43.9815
If I run it once more I get different results. As can be observed, for small input values I get a totally wrong answer (a negative answer is not possible); for other values the accuracy is still poor.
Can someone tell me what I am doing wrong and how to correct it?
Thanks,
Raman