8-tap Daubechies wavelet decomposition in MATLAB

I have code to implement an 8-tap Daubechies wavelet decomposition. First I decompose the image over 4 levels, then I reconstruct the original image from the coefficients. The code is given below:
Im=imread('me.jpg');
% Perform the wavelet transform using the Daubechies 'db8' filter bank
[LL1,HL1,LH1,HH1] = dwt2(Im,'db8');
[LL2,HL2,LH2,HH2] = dwt2(LL1,'db8');
[LL3,HL3,LH3,HH3] = dwt2(LL2,'db8');
[LL4,HL4,LH4,HH4] = dwt2(LL3,'db8');
% inverse wavelet transform
[LL3] = idwt2(LL4, HL4, LH4, HH4,'db8');
[LL2] = idwt2(LL3, HL3, LH3, HH3,'db8');
[LL1] = idwt2(LL2, HL2, LH2, HH2,'db8');
[reconstructed] = idwt2(LL1, HL1, LH1, HH1,'db8');
Running this code, I get the following error:
??? Array dimensions must match for binary array op.
Error in ==> idwt2 at 93
x = upsconv2(a,{Lo_R,Lo_R},sx,dwtEXTM,shift)+ ... % Approximation.
Error in ==> NoiseExtract_2 at 55
[LL2] = idwt2(LL3, HL3, LH3, HH3,'db8');
At the inverse transform step the sizes of LL3, LL2 and LL1 come out different from the forward pass. How can I solve this problem?
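A minimal sketch of one possible fix, assuming a grayscale image: idwt2 accepts an optional size argument, so each reconstruction can be clipped back to the size recorded during the forward pass (the wavedec2/waverec2 pair handles this bookkeeping automatically and is an alternative worth considering).
% Sketch: force each idwt2 output back to the size of the matching
% approximation from the forward pass.
LL3r = idwt2(LL4, HL4, LH4, HH4, 'db8', size(LL3));
LL2r = idwt2(LL3r, HL3, LH3, HH3, 'db8', size(LL2));
LL1r = idwt2(LL2r, HL2, LH2, HH2, 'db8', size(LL1));
reconstructed = idwt2(LL1r, HL1, LH1, HH1, 'db8', size(Im)); % Im assumed grayscale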

Related

How to interpret the regression plot obtained at the end of neural network regression for multiple outputs?

I have trained my neural network model using the MATLAB NN Toolbox. My network has multiple inputs and multiple outputs, 6 and 7 respectively, to be precise. I would like to clarify a few questions based on it:
The final regression plot shown at the end of training indicates very good accuracy, R~0.99. However, since I have multiple outputs, I am confused as to which scatter plot it represents. Shouldn't there be 7 target-vs-predicted plots, one for each output variable?
To my knowledge, R^2 is a better measure of model accuracy, whereas MATLAB reports R in its plot. Do I treat that R as R^2, or should I square the reported R value to obtain R^2?
I generated the MATLAB script containing the weights, biases and activation functions as the final result of the training. So shouldn't I be able to simply give my raw data as input and obtain the corresponding predicted output? To cross-check, I fed in the exact same training set, using the indices MATLAB chose for training, and plotted the predicted output vs. the actual output, but the result is not at all good. Definitely not along the lines of R~0.99. Am I doing anything wrong?
code:
function [y1] = myNeuralNetworkFunction_2(x1)
%MYNEURALNETWORKFUNCTION neural network simulation function.
% X = [torque T_exh lambda t_Spark N EGR];
% Y = [O2R CO2R HC NOX CO lambda_out T_exh2];
% Generated by Neural Network Toolbox function genFunction, 17-Dec-2018 07:13:04.
%
% [y1] = myNeuralNetworkFunction(x1) takes these arguments:
% x = Qx6 matrix, input #1
% and returns:
% y = Qx7 matrix, output #1
% where Q is the number of samples.
%#ok<*RPMT0>
% ===== NEURAL NETWORK CONSTANTS =====
% Input 1
x1_step1_xoffset = [-24;235.248;0.75;-20.678;550;0.799];
x1_step1_gain = [0.00353982300884956;0.00284355877067267;6.26959247648903;0.0275865874012055;0.000366568914956012;0.0533831576137729];
x1_step1_ymin = -1;
% Layer 1
b1 = [1.3808996210168685;-2.0990163849711894;0.9651733083552595;0.27000953282929346;-1.6781835509820286;-1.5110463684800366;-3.6257438832309905;2.1569498669085361;1.9204156230460485;-0.17704342477904209];
IW1_1 = [-0.032892214008082517 -0.55848270745152429 -0.0063993424771670616 -0.56161004933654057 2.7161844536020197 0.46415317073346513;-0.21395624254052176 -3.1570133640176681 0.71972178875396853 -1.9132557838515238 1.3365248285282931 -3.022721627052706;-1.1026780445896862 0.2324603066452392 0.14552308208231421 0.79194435276493658 -0.66254679969168417 0.070353201192052434;-0.017994515838487352 -0.097682677816992206 0.68844109281256027 -0.001684535122025588 0.013605622123872989 0.05810686279306107;0.5853667840629273 -2.9560683084876329 0.56713425120259764 -2.1854386350040116 1.2930115031659106 -2.7133159265497957;0.64316656469750333 -0.63667017646313084 0.50060179040086761 -0.86827897068177973 2.695456517458648 0.16822164719859456;-0.44666821007466739 4.0993786464616679 -0.89370838440321498 3.0445073606237933 -3.3015566360833453 -4.492874075961689;1.8337574137485424 2.6946232855369989 1.1140472073136622 1.6167763205944321 1.8573696127039145 -0.81922672766933646;-0.12561950922781362 3.0711045035224349 -0.6535751823440773 2.0590707752473199 -1.3267693770634292 2.8782780742777794;-0.013438026967107483 -0.025741311825949621 0.45460734966889638 0.045052447491038108 -0.21794568374100454 0.10667240367191703];
% Layer 2
b2 = [-0.96846557414356171;-0.2454718918618051;-0.7331628718025488;-1.0225195290982099;0.50307202195645395;-0.49497234988401961;-0.21817117469133171];
LW2_1 = [-0.97716474643411022 -0.23883775971686808 0.99238069915206006 0.4147649511973347 0.48504023209224734 -0.071372217431684551 0.054177719330469304 -0.25963474838320832 0.27368380212104881 0.063159321947246799;-0.15570858147605909 -0.18816739764334323 -0.3793600124951475 2.3851961990944681 0.38355142531334563 -0.75308427071748985 -0.1280128732536128 -1.361052031781103 0.6021878865831336 -0.24725687748503239;0.076251356114485525 -0.10178293627600112 0.10151304376762409 -0.46453434441403058 0.12114876632815359 0.062856969143306296 -0.0019628163322658364 -0.067809039768745916 0.071731544062023825 0.65700427778446913;0.17887084584125315 0.29122649575978238 0.37255802759192702 1.3684190468992126 0.60936238465090853 0.21955911453674043 0.28477957899364675 -0.051456306721251184 0.6519451272106177 -0.64479205028051967;0.25743349663436799 2.0668075180209979 0.59610776847961111 -3.2609682919282603 1.8824214917530881 0.33542869933904396 0.03604272669356564 -0.013842766338427388 3.8534510207741826 2.2266745660915586;-0.16136175574939746 0.10407287099228898 -0.13902245286490234 0.87616472446622717 -0.027079111747601223 0.024812287505204988 -0.030101536834009103 0.043168268669541855 0.12172932035587079 -0.27074383434206573;0.18714562505165402 0.35267726325386606 -0.029241400610813449 0.53053853235049087 0.58880054832728757 0.047959541165126809 0.16152268183097709 0.23419456403348898 0.83166785128608967 -0.66765237856750781];
% Output 1
y1_step1_ymin = -1;
y1_step1_gain = [0.114200879346771;0.145581598485951;0.000139011547272197;0.000456244862967996;2.05816254143146e-05;5.27704485488127;0.00284355877067267];
y1_step1_xoffset = [-0.045;1.122;2.706;17.108;493.726;0.75;235.248];
% ===== SIMULATION ========
% Dimensions
Q = size(x1,1); % samples
% Input 1
x1 = x1';
xp1 = mapminmax_apply(x1,x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
% Layer 1
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*xp1);
% Layer 2
a2 = repmat(b2,1,Q) + LW2_1*a1;
% Output 1
y1 = mapminmax_reverse(a2,y1_step1_gain,y1_step1_xoffset,y1_step1_ymin);
y1 = y1';
end
% ===== MODULE FUNCTIONS ========
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
y = bsxfun(@minus,x,settings_xoffset);
y = bsxfun(@times,y,settings_gain);
y = bsxfun(@plus,y,settings_ymin);
end
% Sigmoid Symmetric Transfer Function
function a = tansig_apply(n)
a = 2 ./ (1 + exp(-2*n)) - 1;
end
% Map Minimum and Maximum Output Reverse-Processing Function
function x = mapminmax_reverse(y,settings_gain,settings_xoffset,settings_ymin)
x = bsxfun(@minus,y,settings_ymin);
x = bsxfun(@rdivide,x,settings_gain);
x = bsxfun(@plus,x,settings_xoffset);
end
The above is the automatically generated code. Below is the code I used to cross-check the first output variable:
% X and Y are input and output - same as above
X_train = X(results.info1.train.indices,:);
y_train = Y(results.info1.train.indices,:);
out_train = myNeuralNetworkFunction_2(X_train);
scatter(y_train(:,1),out_train(:,1))
To answer your question about R: yes, you should square R to get the R^2 value. In this case they will be very close, since R is very close to 1.
The plots show the correlation between the estimated and real (target) values, so R is the strength of that correlation. You can square it to find the R-square.
Note that the plot you drew and the one MATLAB produced are not plots of the same variables; the ranges and scales of the axes are very different.
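For per-output regression figures, a small sketch (assuming y_train and out_train are the Qx7 matrices from the cross-check code above) that computes R and R^2 for each output column with corrcoef:
for k = 1:size(y_train,2)
    C = corrcoef(y_train(:,k), out_train(:,k));
    fprintf('Output %d: R = %.4f, R^2 = %.4f\n', k, C(1,2), C(1,2)^2);
end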
First of all, is the problem you are trying to solve really a regression problem, or is it a classification problem with 7 classes converted to numeric labels? I assume it is a classification problem, since you are trying to get a success rate for each class.
As for your first question: according to the literature, the recommended value to use is "All: R". If you want the success rate of each of your classes, you need the metrics that apply to classification problems: precision, recall, F-measure, FP rate, TP rate, etc. There is plenty of MATLAB documentation on this (see help roc), and all the values I mentioned, which I think are what you actually want, are obtained from the confusion matrix.
There is a good example of this:
[x,t] = simpleclass_dataset;    % built-in example dataset
net = patternnet(10);           % pattern-recognition network with 10 hidden units
net = train(net,x,t);
y = net(x);
[c,cm,ind,per] = confusion(t,y) % c: overall error rate, cm: confusion matrix
I hope the "nntraintool" window that appears when you run the code will show you what you want.
Your other questions have already been answered. Alternatively, you can consider using a machine learning algorithm with open source software such as Weka.

Unable to fit a nonlinear curve to data in MATLAB

I am trying to fit some data in MATLAB to a Hill function of the form y = r^n/(r^n + K^n). I have data for r and y, and I need to find K and n.
I tried two different approaches after reading the docs extensively: one uses fit from the Curve Fitting Toolbox and the other uses lsqcurvefit from the Optimization Toolbox. I haven't had any success with either. What am I missing?
Here is my xdata and ydata:
xdata = logspace(-2,2,101);
ydata = [0.0981 0.1074 0.1177 0.1289 0.1411 0.1545 0.1692 0.1852 0.2027 ...
0.2219 0.2428 0.2656 0.2905 0.3176 0.3472 0.3795 0.4146 0.4528 ...
0.4944 0.5395 0.5886 0.6418 0.6994 0.7618 0.8293 0.9022 0.9808 ...
1.0655 1.1566 1.2544 1.3592 1.4713 1.5909 1.7183 1.8537 1.9972 ...
2.1490 2.3089 2.4770 2.6532 2.8371 3.0286 3.2272 3.4324 3.6437 ...
3.8603 4.0815 4.3065 4.5344 4.7642 4.9950 5.2258 5.4556 5.6833 ...
5.9082 6.1292 6.3457 6.5567 6.7616 6.9599 7.1511 7.3347 7.5105 ...
7.6783 7.8379 7.9893 8.1324 8.2675 8.3946 8.5139 8.6257 8.7301 ...
8.8276 8.9184 9.0029 9.0812 9.1539 9.2212 9.2834 9.3408 9.3939 ...
9.4427 9.4877 9.5291 9.5672 9.6022 9.6343 9.6638 9.6909 9.7157 ...
9.7384 9.7592 9.7783 9.7957 9.8117 9.8263 9.8397 9.8519 9.8630 ...
9.8732 9.8826];
'Fit' code:
HillEqn = '@(x,xdata)xdata.^x(1)./(xdata.^x(1)+x(2).^x(1))';
startPoints = [1 1];
fit(xdata',ydata',HillEqn,'Start',startPoints)
Error message:
Error using fittype>iTestCustomModelEvaluation (line 726)
Expression @(x,xdata)xdata.^x(1)./(xdata.^x(1)+x(2).^x(1)) is not a valid MATLAB
expression, has non-scalar coefficients, or cannot be evaluated:
Undefined function 'imag' for input arguments of type 'function_handle'.
'lsqcurvefit' code:
fun = @(x,xdata) xdata.^x(1)./(xdata.^x(1)+x(2).^x(1));
x0 = [1,1]; % Initial Parameter Estimates
x = lsqcurvefit(fun,x0,xdata,ydata);
Error message:
Error using snls (line 47)
Objective function is returning undefined values at initial point. lsqcurvefit cannot
continue.
First, I think you need 3 parameters to start from, because the Hill function has a maximum of 1 while your data tops out around 10. So either normalize your data with ydata = ydata./max(ydata), or add a 3rd parameter (which I did here, just for the demonstration). This is how I did it:
startPoints = [1 1 1];
s = fitoptions('Method','NonlinearLeastSquares',...
'Lower',[0 0 0 ],...
'Upper',[inf inf inf],...
'Startpoint',startPoints);
HillEqn = fittype( 'x.^a1./(x.^a1+a2.^a1)*a3','options',s);
[ffun,gofrr] = fit(xdata(:),ydata(:),HillEqn);
yfit=feval(ffun,xdata(:)); %Fitted function
plot(xdata,ydata,'-bx',xdata,yfit,'-ro');
ffun =
General model:
ffun(x) = x.^a1./(x.^a1+a2.^a1)*a3
Coefficients (with 95% confidence bounds):
a1 = 1.004 (1.004, 1.004)
a2 = 0.9977 (0.9975, 0.9979)
a3 = 9.979 (9.978, 9.979)
Side note:
In your case what you really want to do is transform the data by looking at Y = 1./ydata instead, fit that, and then take the reciprocal again to get back to the "Hill" representation. The problem becomes linear in the reciprocals (since n ≈ 1 here): 1./ydata is a linear function of 1./xdata, for which a polyfit of order 1 will do:
Y=1./ydata;
X = 1./xdata;
p=polyfit(X,Y,1);
plot(X,Y,'-bx',X,polyval(p,X),'-ro')
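To map the straight-line fit back to the Hill parameters, note that for n = 1 the reciprocal relation is 1/y = 1/a3 + (a2/a3)*(1/x). A quick sketch, reusing the p returned by polyfit above:
a3_est = 1/p(2);    % plateau, should come out close to the a3 ≈ 9.98 found above
a2_est = p(1)/p(2); % half-saturation constant K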

How to calculate MDA for face recognition in MATLAB

I want to do PCA and then MDA (Multiple Discriminant Analysis) in order to reduce the dimensionality of the dataset from 99^2 to 49 (face recognition).
My first step was reducing the dimensionality from 99^2 to 50 by PCA. Now I want to use MDA to reduce from c to c-1, i.e. from 50 to 49.
I've tried the code below, but I get complex values in Answer, which is wrong.
% calculate PCA
mat_mean = mean(trainData(:));
normalized_train = trainData - mat_mean;
A = normalized_train/std(normalized_train(:));
S1 = A * A';
[V,Z] = eigs(S1,50);
Wpca = A'*V*Z;
% calculate MDA
[Sb,Sw] = scattermat(Wpca);
Sb1=Wpca*Sb*Wpca';
Sw1=Wpca*Sw*Wpca';
[Answer,ready1] = eigs(Sb1,Sw1,49);
Any suggestions as to what I am doing wrong?
The reason is that eigs computes the eigenvalues of the matrix, which involves square roots, and there are negative values in Sb and Sw, so the results come out complex.
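If the complex values come only from round-off asymmetry in the products Wpca*Sb*Wpca', one thing worth trying (a sketch, not a guaranteed fix) is symmetrizing both matrices before the generalized eigenproblem; eigs then returns real results when Sw1 is also positive definite:
Sb1 = (Sb1 + Sb1')/2; % remove round-off asymmetry
Sw1 = (Sw1 + Sw1')/2;
[Answer,ready1] = eigs(Sb1,Sw1,49);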

Fourier transform of male and female voices

I'm doing a Fourier transform using MATLAB R2014a. First I read two audio files, one female and one male, then I computed the magnitude and phase of each. A task in my report requires mixing the female speech amplitude with the phase spectrum of the other signal (the male phase), or vice versa. So I wrote the code below, and I keep getting this error:
Error using *
Inner matrix dimensions must agree.
out1 = Mag_Male*exp(1i*Phase_Fem);
And even using .*:
Error in Untitled9 (line 183)
out1 = Mag_Male.*exp(1i*Phase_Fem);
With .* in both operators, the full error is:
>> Untitled9
Error using .*
Matrix dimensions must agree.
Error in Untitled9 (line 183)
out1 = Mag_Male.*(exp(1i.*Phase_Fem));
The sizes of m and f, printed with the size function:
code:
maleAudio_row = size(m);
femaleAudio_row = size(f);
display(maleAudio_row);
display(femaleAudio_row);
Output:
maleAudio_row =
119855 2
femaleAudio_row =
119070 1
although the same files worked fine for my colleagues :(
This is my Code:
Fs = 11025;
Ts = 1/Fs;
t = 0:Ts:0.1;
[m, Fs]=audioread('hamid1.wav');
[f, Fs]=audioread('myvoice.wav');
player = audioplayer(m,Fs);
player2 = audioplayer(f,Fs);
%play(player2);
%---- Frquency Domain Sampling-----%
Fem = fft(f);
Phase_Fem = angle(Fem);
Mag_Fem = abs(Fem);
%-----------------------------------%
Male = fft(m);
Mag_Male = abs(Male);
Phase_Male = angle(Male);
%-----------------------------------%
out1 = Mag_Male*exp(1i*Phase_Fem); % put the female phase on the male magnitude
out2 = ifft(out1); % convert back to the time domain so the audio can be played
Nx = length(out2);
F0 = 1/(Ts*Nx);
result = audioplayer(out2);
play(result);
Your 'hamid1.wav' is a two-channel wav file, whereas 'myvoice.wav' is a one-channel wav. As mentioned in the MATLAB manual (http://nl.mathworks.com/help/matlab/ref/audioread.html):
Audio data in the file, returned as an m-by-n matrix, where m is the number of audio samples read and n is the number of audio channels in the file.
Just convert m to one channel as m = 0.5*(m(:,1)+m(:,2)), trim both signals to a common length, and use the .* product (as people suggested in the comments):
clear all;
m = randn(1000,2); %dummy signal
f = randn(999,1); %dummy signal
N = min(size(m,1),size(f,1));
Male = fft(0.5*(m(1:N,1)+m(1:N,2)));
Fem = fft(f(1:N,1));
Mag_Male = abs(Male);
Phase_Male = angle(Male);
Phase_Fem = angle(Fem);
Mag_Fem = abs(Fem);
out1 = Mag_Male.*exp(1i*Phase_Fem);
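A possible follow-up (a sketch, with the real recordings rather than the dummy signals above): since the magnitude and phase both come from real signals of the same length, out1 is conjugate-symmetric up to round-off, so its inverse FFT is essentially real; real() strips the numerical residue before playback.
out2 = real(ifft(out1)); % imaginary part is round-off only
soundsc(out2, Fs);       % Fs as returned by audioread in the question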
If you use *, MATLAB will try to do matrix multiplication. What you probably want is the element-by-element operator, which is a . before the *. This multiplies the first element of one vector by the first element of the other, the second by the second, and so on.
out1 = Mag_Male.*exp(1i*Phase_Fem);
This assumes the results of your FFTs are the same length, which will be the case if the original samples are the same length.

Issues in fitting data to linear model

Assume a noiseless AR(1) process y(t) = a*y(t-1). I have the following conceptual questions and would be glad for clarification.
Q1 - Discrepancy between mathematical formulation and implementation: the mathematical formulation of an AR model is y(t) = -sum_{i=1}^{p} a_i*y(t-i) + eta(t), where p is the model order and eta(t) is white Gaussian noise. But when estimating coefficients using any method like arburg() or least squares, we simply call that function; I do not know whether white Gaussian noise is implicitly added. Then, when we evaluate the AR equation with the estimated coefficients, I have seen that neither the negative sign is kept nor the noise term added.
What is the correct representation of the AR model, and how do I find the average coefficients over k trials when I have only a single sample of 1000 data points?
Q2 - Coding problem: how to simulate fitted_data for k trials and then find the residuals. I fitted data ("data") generated from an unknown system and obtained the coefficient by:
load('data.txt');
for trials = 1:10
model = ar(data,1,'ls');
original_data=data;
fitted_data(i)=coeff1*data(i-1); % **OR**
data(i)=coeff1*data(i-1);
fitted_data=data;
residual= original_data - fitted_data;
plot(original_data,'r'); hold on; plot(fitted_data);
end
When calculating the residual, is fitted_data obtained as above, by evaluating the AR equation with the estimated coefficients? MATLAB has a function for doing this, but I wanted to write my own. So, after finding the coefficients from the original data, how do I evaluate the model? The code above is incorrect. Attached is the plot of original_data and fitted_data.
If your model is simply y(n) = a*y(n-1) with scalar a, then here is the solution:
y = randn(10, 1);
a = y(1 : end - 1) \ y(2 : end); % least-squares estimate of a
y_estim = a * y(1 : end - 1);    % predictions of y(2:end)
residual = y(2 : end) - y_estim;
Of course, you should separate the data into train and test sets, and apply a to the test data. You can generalize this approach to y(n) = a*y(n-1) + b*y(n-2), etc.
Note that \ is MATLAB's mldivide() operator.
Edit:
% model: y[n] = c + a*y(n-1) + b*y(n-2) +...+z*y(n-n_order)
n_order = 3;
allow_offset = true; % alows c in the model
% train
y_train = randn(20,1); % from your data
[y_in, y_out] = shifted_input(y_train, n_order, allow_offset);
a = y_in \ y_out;
% now test
y_test = randn(20,1); % from your data
[y_in, y_out] = shifted_input(y_test, n_order, allow_offset);
y_estim = y_in * a; % same a
residual = y_out - y_estim;
here is shifted_input():
function [y_in, y_out] = shifted_input(y, n_order, allow_offset)
y_out = y(n_order + 1 : end);
n_rows = size(y, 1) - n_order;
y_in = nan(n_rows, n_order);
for k = 1 : n_order
y_in(:, k) = y(1 : n_rows);
y = circshift(y, -1);
end
if allow_offset
y_in = [y_in, ones(n_rows, 1)];
end
return
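To see what shifted_input() builds, here is a toy run (illustration only): row i of y_in holds the n_order samples preceding y_out(i).
[y_in, y_out] = shifted_input((1:6)', 2, false)
% y_in =  1 2    y_out =  3
%         2 3             4
%         3 4             5
%         4 5             6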
AR-type models can serve a number of purposes, including linear prediction, linear predictive coding, and noise filtering. The eta(t) are not something we are interested in retaining; rather, part of the point of these algorithms is to remove their influence, to the extent possible, by looking for persistent patterns in the data.
I have textbooks that, in the context of linear prediction, do not include the negative sign that appears in your expression before the sum. On the other hand, MATLAB's function lpc does:
Xp(n) = -A(2)*X(n-1) - A(3)*X(n-2) - ... - A(N+1)*X(n-N)
I recommend you look at function lpc if you haven't already, and at the examples from the documentation such as the following:
randn('state',0);
noise = randn(50000,1); % Normalized white Gaussian noise
x = filter(1,[1 1/2 1/3 1/4],noise);
x = x(45904:50000);
% Compute the predictor coefficients, estimated signal, prediction error, and autocorrelation sequence of the prediction error:
p = lpc(x,3);
est_x = filter([0 -p(2:end)],1,x); % Estimated signal
e = x - est_x; % Prediction error
[acs,lags] = xcorr(e,'coeff'); % ACS of prediction error
The estimated x is computed as est_x. Note how the example uses filter. Quoting the MATLAB doc again, filter(b,a,x) is a "Direct Form II Transposed" implementation of the standard difference equation:
a(1)*y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
- a(2)*y(n-1) - ... - a(na+1)*y(n-na)
which means that in the prior example est_x(n) is computed as
est_x(n) = -p(2)*x(n-1) -p(3)*x(n-2) -p(4)*x(n-3)
which is what you expect!
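As a quick sanity check (a sketch reusing x and p from the example above), the filter() call reproduces the explicit prediction loop once the startup samples, where filter assumes zero initial conditions, are excluded:
est_x2 = zeros(size(x));
for n = 4:length(x)
    est_x2(n) = -p(2)*x(n-1) - p(3)*x(n-2) - p(4)*x(n-3);
end
max(abs(est_x(4:end) - est_x2(4:end))) % ~0, up to round-off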
Edit:
As regards the function ar, the MATLAB documentation explains that the output coefficients have the same meaning as in the lpc scenario discussed above.
The right way to evaluate the output of the AR model is to compute
data_armod(i) = -coeff(2)*data(i-1) - coeff(3)*data(i-2) - coeff(4)*data(i-3)
where coeff is the coefficient vector returned by
model = ar(data,3,'ls');
coeff = model.a;
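Putting that together, a minimal sketch (assuming data is a column vector and coeff = model.a = [1 a2 a3 a4]) that evaluates the AR(3) model over the whole record and forms the residual:
n_order = 3;
data_armod = zeros(size(data));
for i = n_order+1:length(data)
    data_armod(i) = -coeff(2)*data(i-1) - coeff(3)*data(i-2) - coeff(4)*data(i-3);
end
residual = data(n_order+1:end) - data_armod(n_order+1:end); % drop startup samples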