Simple denoising autoencoder for 1D data in Matlab

I'm trying to set up a simple denoising autoencoder with Matlab for 1D data. As there is currently no specialised input layer for 1D data, the imageInputLayer() function has to be used:
function net = DenoisingAutoencoder(data)
[N, n] = size(data);
%setting up input
X = zeros([n 1 1 N]);
for i = 1:n
for j = 1:N
X(i, 1, 1, j) = data(j,i);
end
end
% noisy X : 1/10th of elements are set to 0
Xnoisy = X;
mask1 = (mod(randi(10, size(X)), 7) ~= 0);
Xnoisy = Xnoisy .* mask1;
layers = [imageInputLayer([n 1 1]) fullyConnectedLayer(n) regressionLayer()];
opts = trainingOptions('sgdm');
net = trainNetwork(X, Xnoisy, layers, opts);
However, the code fails with this error message:
The output size [1 1 n] of the last layer doesn't match the
response size [n 1 1].
Any thoughts on how the input / layers should be reconfigured? If the fullyConnectedLayer is left out, the code runs fine, but then obviously I'm left without the hidden layer.

The target output should be a matrix, not a 4D tensor.
Here's a working version of the previous code:
function DenoisingAutoencoder(data)
[N, n] = size(data);
X = data;
Xoriginal = data;
Xout = data';
% corrupting the input
zeroMask = (mod(randi(100, size(X)), 99) ~= 0);
X = X + randn(size(X))*0.05;
X = X .* zeroMask;
X4D = reshape(X, [1 n 1 N]);
layers = [imageInputLayer([1 n]) fullyConnectedLayer(n) regressionLayer()];
opts = trainingOptions('sgdm');
net = trainNetwork(X4D, Xout, layers, opts);
R = predict(net, X4D)';
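As a quick sanity check (a minimal sketch using the variables defined in the answer code above, assuming R and Xout come out with matching shapes), the reconstruction can be compared against the clean targets:
% Sketch: mean squared reconstruction error of the trained autoencoder
reconstructionMSE = mean((R(:) - Xout(:)).^2)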

Related

FastICA Implementation in Matlab

I have been working on a FastICA algorithm implementation in MATLAB. Currently the code does not separate the signals as well as I'd like. I was wondering if anyone here could give me some advice on what I could do to fix this problem?
disp('*****Importing Signals*****');
s = [1,30000];
[m1,Fs1] = audioread('OSR_us_000_0034_8k.wav', s);
[f1,Fs2] = audioread('OSR_us_000_0017_8k.wav', s);
ss = size(f1,1);
n = 2;
disp('*****Mixing Signals*****');
A = randn(n,n); %developing mixing matrix
x = A*[m1';f1']; %A*x
m_x = sum(x, n)/ss; %mean of x
xx = x - repmat(m_x, 1, ss); %centering the matrix
c = cov(x');
sq = inv(sqrtm(c)); %whitening the data
x = c*xx;
D = diff(tanh(x)); %setting up newtons method
SD = diff(D);
disp('*****Generating Weighted Matrix*****');
w = randn(n,1); %Random weight vector
w = w/norm(w,2); %unit vector
w0 = randn(n,1);
w0 = w0/norm(w0,2); %unit vector
disp('*****Unmixing Signals*****');
while abs(abs(w0'*w)-1) > size(w,1)
w0 = w;
w = x*D(w'*x) - sum(SD'*(w'*x))*w; %perform ICA
w = w/norm(w, 2);
end
disp('*****Output After ICA*****');
sound(w'*x); % Supposed to be one of the original signals
subplot(4,1,1);plot(m1); title('Original Male Voice');
subplot(4,1,2);plot(f1); title('Original Female Voice');
subplot(4,1,4);plot(w'*x); title('Post ICA: Estimated Signal');
%figure;
%plot(z); title('Random Mixed Signal');
%figure;
%plot(100*(w'*x)); title('Post ICA: Estimated Signal');
Your covariance matrix c is 2 by 2; you cannot work with that. You have to mix your signals into many channels with random numbers to get anywhere, because some signal (m1) must be common to the different channels. I was unable to follow your FastICA code all the way through, but here is a PCA example:
url = {'https://www.voiptroubleshooter.com/open_speech/american/OSR_us_000_0034_8k.wav';...
'https://www.voiptroubleshooter.com/open_speech/american/OSR_us_000_0017_8k.wav'};
%fs = 8000;
m1 = webread(url{1});
m1 = m1(1:30000);
f1 = webread(url{2});
f1 = f1(1:30000);
ss = size(f1,1);
n = 2;
disp('*****Mixing Signals*****');
A = randn(50,n); %developing mixing matrix
x = A*[m1';f1']; %A*x
[www,comp] = pca(x');
sound(comp(:,1)',8000)
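To see how well the leading component matches one of the sources, you could also plot them side by side (a small sketch reusing the variables from the example above):
% Sketch: visual comparison of a source signal and the first principal component
subplot(2,1,1); plot(m1);        title('Original signal (m1)');
subplot(2,1,2); plot(comp(:,1)); title('First principal component');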

How to get the output response from a state space equation?

For a state space equation in which matrix A depends on the variable t (time), how can I get the step or output response?
This is the code, which doesn't work:
A = [sin(t) 0;0 cos(t)];
B = [0.5; 0.0];
C = [1 0; 0 1];
G = ss(A,B,C,[]);
step(G,t)
x0 = [-1;0;2];
initial(G,x0)
Here is the error message:
Error using horzcat Dimensions of matrices being concatenated are not
consistent.
Error in Response (line 11) A = [sin(t) 0;0 cos(t)];
As already pointed out, the ss function can only be used to build LTI systems, but you can discretise your model analytically using methods such as forward Euler, backward Euler, Tustin, etc., and simulate it with a for loop.
For your example, you could run something like this:
consider a sampling period h = 0.01 (or lower if the dynamics are not captured properly);
select N as the number of steps for the simulation (100, 1000 etc.);
declare the time instants vector t = (0:N-1)*h;
create a for loop which computes the system states and outputs, here using the forward Euler method (see https://en.wikipedia.org/wiki/Euler_method):
% A = [sin(t) 0;0 cos(t)];
B = [0.5; 0.0];
C = [1 0; 0 1];
D = [0; 0];
x0 = [-1;1]; % the initial state must have two elements as this is a second-order system
u = 0; % constant zero input, but can be modified
N = 1000;
h = 0.01;
t = (0:N-1)*h;
x_vec = [];
y_vec = [];
xk = x0;
yk = [0;0];
for k=1:N
Ad = eye(2)+h*[sin(t(k)) 0; 0 cos(t(k))];
Bd = h*B; % C and D remain the same
yk = C*xk + D*u;
xk = Ad*xk + Bd*u;
x_vec = [x_vec, xk]; % keep results in memory
y_vec = [y_vec, yk];
end
% Plot output response
plot(t,y_vec);
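If forward Euler is not accurate enough for the chosen h, a backward (implicit) Euler step is a drop-in alternative. Here is a minimal sketch for the same system; it reuses B, u, x0, N, h and t from the code above and solves a small linear system at every step:
% Sketch: backward Euler, x(k+1) = x(k) + h*( A(t(k+1))*x(k+1) + B*u )
xk = x0;
x_be = [];
for k = 1:N-1
    Ak1 = [sin(t(k+1)) 0; 0 cos(t(k+1))];   % A evaluated at the next time instant
    xk  = (eye(2) - h*Ak1) \ (xk + h*B*u);  % implicit update
    x_be = [x_be, xk];
end
plot(t(2:N), x_be);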

Neural Network Backpropagation Algorithm Implementation

I implemented a neural network backpropagation algorithm in MATLAB; however, it is not training correctly. The training data is a matrix X = [x1, x2], dimension 2 x 200, and I have a target matrix T = [target1, target2], dimension 2 x 200. The first 100 columns in T can be [1; -1] for class 1, and the second 100 columns in T can be [-1; 1] for class 2.
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes
For some reason the total training error is always 1.000; it never gets close to theta, so it runs forever.
I used the standard backpropagation formulas for the sensitivities and weight updates (documented step by step in the code below).
The total training error is the per-sample mean squared error averaged over all training samples, as computed with immse at the end of the training loop.
The code is well documented below. I would appreciate any help.
clear;
close all;
clc;
%%('---------------------')
%%('Generating dummy data')
%%('---------------------')
d11 = [2;2]*ones(1,70)+2.*randn(2,70);
d12 = [-2;-2]*ones(1,30)+randn(2,30);
d1 = [d11,d12];
d21 = [3;-3]*ones(1,50)+randn([2,50]);
d22 = [-3;3]*ones(1,50)+randn([2,50]);
d2 = [d21,d22];
hw5_1 = d1;
hw5_2 = d2;
save hw5.mat hw5_1 hw5_2
x1 = hw5_1;
x2 = hw5_2;
% step 1: Construct training data matrix X=[x1,x2], dimension 2x200
training_data = [x1, x2];
% step 2: Construct target matrix T=[target1, target2], dimension 2x200
target1 = repmat([1; -1], 1, 100); % class 1
target2 = repmat([-1; 1], 1, 100); % class 2
T = [target1, target2];
% step 3: normalize training data
training_data = training_data - mean(training_data(:));
training_data = training_data / std(training_data(:));
% step 4: specify parameters
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes, actual hidden nodes should be 11 (including a bias)
Ni = 2; % dimension of input vector = number of input nodes, actual input nodes should be 3 (including a bias)
No = 2; % number of class = number of out nodes
% step 5: Initialize the weights
a = -1/sqrt(No);
b = +1/sqrt(No);
inputLayerToHiddenLayerWeight = (b-a).*rand(Ni, Nh) + a
hiddenLayerToOutputLayerWeight = (b-a).*rand(Nh, No) + a
J = inf;
p = 1;
% activation function
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech2(b*net)
a = 1.716;
b = 2/3;
while J > theta
% step 6: randomly choose one training sample vector from X,
% together with its target vector
k = randi([1, size(training_data, 2)]);
input_X = training_data(:,k);
input_T = T(:,k);
% step 7: Calculate net_j values for hidden nodes in layer 1
% hidden layer output before activation function applied
netj = inputLayerToHiddenLayerWeight' * input_X;
% step 8: Calculate hidden node output Y using activation function
% apply activation function to hidden layer neurons
Y = a*tanh(b*netj);
% step 9: Calculate net_k values for output nodes in layer 2
% output layer output before activation function applied
netk = hiddenLayerToOutputLayerWeight' * Y;
% step 10: Calculate output node output Z using the activation function
% apply activation function to the output layer neurons
Z = a*tanh(b*netk);
% step 11: Calculate sensitivity delta_k = (target - Z) * f'(Z)
% find the error between the expected_output and the neuron output
% we got using the weights
% delta_k = (expected - output) * activation(output)
delta_k = [];
for i=1:size(Z)
yi = Z(i,:);
expected_output = input_T(i,:);
delta_k = [delta_k; (expected_output - yi) ...
* a*b*(sech(b*yi)).^2];
end
% step 12: Calculate sensitivity
% delta_j = Sum_k(delta_k * hidden-to-out weights) * f'(net_j)
% error = (weight_k * error_j) * activation(output)
delta_j = [];
for j=1:size(Y)
yi = Y(j,:);
error = 0;
for k=1:size(delta_k)
error = error + delta_k(k,:)*hiddenLayerToOutputLayerWeight(j, k);
end
delta_j = [delta_j; error * (a*b*(sech(b*yi)).^2)];
end
% step 13: update weights
%2x10
inputLayerToHiddenLayerWeight = [];
for i=1:size(input_X)
xi = input_X(i,:);
wji = [];
for j=1:size(delta_j)
wji = [wji, eta * xi * delta_j(j,:)];
end
inputLayerToHiddenLayerWeight = [inputLayerToHiddenLayerWeight; wji];
end
inputLayerToHiddenLayerWeight
%10x2
hiddenLayerToOutputLayerWeight = [];
for j=1:size(Y)
yi = Y(j,:);
wjk = [];
for k=1:size(delta_k)
wjk = [wjk, eta * delta_k(k,:) * yi];
end
hiddenLayerToOutputLayerWeight = [hiddenLayerToOutputLayerWeight; wjk];
end
hiddenLayerToOutputLayerWeight
% Mean Square Error
J = 0;
for j=1:size(training_data, 2)
X = training_data(:,j);
t = T(:,j);
netj = inputLayerToHiddenLayerWeight' * X;
Y = a*tanh(b*netj);
netk = hiddenLayerToOutputLayerWeight' * Y;
Z = a*tanh(b*netk);
J = J + immse(t, Z);
end
J = J/size(training_data, 2)
p = p + 1;
if p == 4
break;
end
end
% testing neural network using the inputs
test_data = [[2; -2], [-3; -3], [-2; 5], [3; -4]];
for i=1:size(test_data, 2)
end
Weight decay isn't essential for neural network training.
What I did notice is that your feature normalization wasn't correct.
The usual formula for scaling data to the range 0 to 1 is
(x - min) / (max - min)
Note: you apply this to every element of the array (or vector). Data inputs for a NN need to be within the range [0,1]. (Technically they can be a little outside of that, roughly [-3,3], but values further from 0 make training difficult.)
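For example, a minimal element-wise scaling sketch (using the training_data matrix from the question code):
% Sketch: scale every element of the training matrix to [0,1]
lo = min(training_data(:));
hi = max(training_data(:));
training_data = (training_data - lo) / (hi - lo);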
Edit:
I am not familiar with this activation function:
a = 1.716;
b = 2/3;
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech2(b*net)
It seems like a variation on tanh.
Could you elaborate on what it is?
If your net still doesn't work, give me an update and I'll look at your code more closely.

Mel-frequency function: error with matrix dimensions

I'm trying to make a prototype audio recognition system by following this link: http://www.ifp.illinois.edu/~minhdo/teaching/speaker_recognition/. It is quite straightforward so there is almost nothing to worry about. But my problem is with the mel-frequency function. Here is the code as provided on the website:
function m = melfb(p, n, fs)
% MELFB Determine matrix for a mel-spaced filterbank
%
% Inputs: p number of filters in filterbank
% n length of fft
% fs sample rate in Hz
%
% Outputs: x a (sparse) matrix containing the filterbank amplitudes
% size(x) = [p, 1+floor(n/2)]
%
% Usage: For example, to compute the mel-scale spectrum of a
% colum-vector signal s, with length n and sample rate fs:
%
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
%
% z would contain p samples of the desired mel-scale spectrum
%
% To plot filterbanks e.g.:
%
% plot(linspace(0, (12500/2), 129), melfb(20, 256, 12500)'),
% title('Mel-spaced filterbank'), xlabel('Frequency (Hz)');
f0 = 700 / fs;
fn2 = floor(n/2);
lr = log(1 + 0.5/f0) / (p+1);
% convert to fft bin numbers with 0 for DC term
bl = n * (f0 * (exp([0 1 p p+1] * lr) - 1));
b1 = floor(bl(1)) + 1;
b2 = ceil(bl(2));
b3 = floor(bl(3));
b4 = min(fn2, ceil(bl(4))) - 1;
pf = log(1 + (b1:b4)/n/f0) / lr;
fp = floor(pf);
pm = pf - fp;
r = [fp(b2:b4) 1+fp(1:b3)];
c = [b2:b4 1:b3] + 1;
v = 2 * [1-pm(b2:b4) pm(1:b3)];
m = sparse(r, c, v, p, 1+fn2);
But it gave me an error:
Error using * Inner matrix dimensions must agree.
Error in MFFC (line 17) z = m * abs(f(1:n2)).^2;
When I include these 2 lines just before line 17:
size(m)
size(abs(f(1:n2)).^2)
It gave me :
ans =
20 65
ans =
1 65
So should I transpose the second matrix? Or should I interpret this as a row-wise multiplication and modify the code?
Edit: Here is the main function (I simply run MFCC()):
function result = MFFC()
[y Fs] = audioread('s1.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
spectrum = FFT_After_Windowing(Windowed);
%imagesc(mag2db(abs(spectrum)))
p = 20;
S = size(spectrum);
n = S(2);
f = spectrum;
m = melfb(p, n, Fs);
n2 = 1 + floor(n/2);
size(m)
size(abs(f(1:n2)).^2)
z = m * abs(f(1:n2)).^2;
result = z;
And here are the auxiliary functions:
function f = Frame_Blocking(y,N)
% Parameters: M = 100, N = 256
% Default : M = 100; N = 256;
M = fix(N/3);
Frames = [];
first = 1; last = N;
len = length(y);
while last <= len
Frames = [Frames; y(first:last)'];
first = first + M;
last = last + M;
end;
if last < len
first = first + M;
Frames = [Frames; y(first : len)];
end
f = Frames;
function f = Windowing(Frames)
S = size(Frames);
N = S(2);
M = S(1);
Windowed = zeros(M,N);
nn = 1:N;
wn = 0.54 - 0.46*cos(2*pi/(N-1)*(nn-1));
for ii = 1:M
Windowed(ii,:) = Frames(ii,:).*wn;
end;
f = Windowed;
function f = FFT_After_Windowing(Windowed)
spectrum = fft(Windowed);
f = spectrum;
Transpose s or transpose the resulting f (it's just a matter of convention).
There is nothing wrong with the melfb function you are using, merely with the dimensions of the signal in the example you are trying to run (in the commented lines 14-17).
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
The example assumes that you are using a "colum-vector signal s". From the size of your Fourier transformed f (done via fft which respects the input signal dimensions) your input signal s is a row-vector signal.
The part that gives you the error is the actual filtering operation that requires multiplying a p x n2 matrix with a n2 x 1 column-vector (i.e., each filter's response is multiplied pointwise with the Fourier of the input signal). Since your input s is 1 x n, your f will be 1 x n and the final matrix to vector multiplication for z will give an error.
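Concretely, forcing the signal into a column vector before the FFT is enough (a minimal sketch using the names from the commented example in melfb):
% Sketch: make sure s is a column vector so f(1:n2) is n2 x 1
f  = fft(s(:));
m  = melfb(p, n, fs);
n2 = 1 + floor(n/2);
z  = m * abs(f(1:n2)).^2;   % (p x n2) * (n2 x 1) -> p x 1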
Thanks to gevang's answer, I was able to find my mistake. Here is how I modified the code:
function result = MFFC()
[y Fs] = audioread('s2.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
%spectrum = FFT_After_Windowing(Windowed');
%imagesc(mag2db(abs(spectrum)))
p = 20;
%S = size(spectrum);
%n = S(2);
%f = spectrum;
S1 = size(Windowed);
n = S1(2);
n2 = 1 + floor(n/2);
%z = zeros(S1(1),n2);
z = zeros(20,S1(1));
for ii=1: S1(1)
s = Windowed(ii,:)'; % one frame as a column vector
f = FFT_After_Windowing(s); % FFT of the frame (applied once)
m = melfb(p,n,Fs);
% n2 = 1 + floor(n/2);
z(:,ii) = m * abs(f(1:n2)).^2;
end;
%f = FFT_After_Windowing(Windowed');
%S = size(f);
%n = S(2);
%size(f)
%m = melfb(p, n, Fs);
%n2 = 1 + floor(n/2);
%size(m)
%size(abs(f(1:n2)).^2)
%z = m * abs(f(1:n2)).^2;
result = z;
As you can see, I naively assumed that the function deals with row-wise matrices, but in fact it expects column vectors (and maybe column-wise matrices). So I iterate over the frames (the rows of Windowed), turn each one into a column vector, and then combine the results.
But I don't think this is efficient, vectorized code, and I still can't figure out how to do the whole operation column-wise on the input matrix (Windowed, after the windowing step) instead of using a loop; one option is sketched below.
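One way to drop the loop entirely (a sketch, assuming Windowed is frames-by-samples as built above, with p and Fs defined as in MFFC): transpose once so every frame becomes a column, FFT all columns at once, and apply the filterbank with a single matrix multiplication:
% Sketch: vectorised mel filtering of all frames at once
F = fft(Windowed'); % fft works column-wise, so each column is one frame's spectrum
n = size(Windowed, 2);
n2 = 1 + floor(n/2);
m = melfb(p, n, Fs);
z = m * abs(F(1:n2, :)).^2; % (p x n2) * (n2 x numFrames) -> p x numFrames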

Can't recover the parameters of a model using ode45

I am trying to simulate the rotation dynamics of a system. I am testing my code with simulated data to verify that it works, but I can never recover the parameters I pass to the model. In other words, I can't re-estimate the parameters I chose for the model.
I am using MATLAB for that and specifically ode45. Here is my code:
% Load the input-output data
[torque outputs] = DataLogs2();
u = torque;
% using the simulation data
Ixx = 1.00;
Iyy = 2.00;
Izz = 3.00;
x0 = [0; 0; 0];
Ts = .02;
t = 0:Ts:Ts * ( length(u) - 1 );
[ T, x ] = ode45( @(t,x) rotationDyn( t, x, u(1+floor(t/Ts),:), Ixx, Iyy, Izz), t, x0 );
w = x';
N = length(w);
q = 1; % a counter for the A and B matrices
% The Algorithm
for k=1:1:N
w_telda = [ 0 -w(3, k) w(2,k); ...
w(3,k) 0 -w(1,k); ...
-w(2,k) w(1,k) 0 ];
if k == N % to handle the problem of the last iteration
w_dash(:,k) = (-w(:,k))/Ts;
else
w_dash(:,k) = (w(:,k+1)-w(:,k))/Ts;
end
a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
A(q:q+2,:) = a; % a 3N*9 matrix
B(q:q+2,:) = u(k,:)'; % a 3N*1 matrix % u(:,k)
q = q + 3;
end
% Forcing J to be diagonal. This is the case when we consider our quadcopter as two thin uniform
% rods crossed at the origin with a point mass (motor) at the end of each.
A_new = [A(:, 1) A(:, 5) A(:, 9)];
vec_J_diag = A_new\B;
J_diag = diag([vec_J_diag(1), vec_J_diag(2), vec_J_diag(3)])
eigenvalues_J_diag = eig(J_diag)
error = norm(A_new*vec_J_diag - B)
where my dynamic model is defined as:
function [dw, y] = rotationDyn(t, w, tau, Ixx, Iyy, Izz, varargin)
% The output equation
y = [w(1); w(2); w(3)];
% State equation
% dw = (I^-1)*( tau - cross(w, I*w) );
dw = [Ixx^-1 * tau(1) - ((Izz-Iyy)/Ixx)*w(2)*w(3);
Iyy^-1 * tau(2) - ((Ixx-Izz)/Iyy)*w(1)*w(3);
Izz^-1 * tau(3) - ((Iyy-Ixx)/Izz)*w(1)*w(2)];
end
Practically, what this code should do is calculate the eigenvalues of the inertia matrix J, i.e. recover the Ixx, Iyy, and Izz that I passed to the model at the very beginning (1, 2 and 3), but all I get are wrong results.
Is the problem with using ode45?
Well, the problem wasn't with the ode45 call. The problem is that in system identification you can only build an (N-1)-sample difference signal from an N-sample signal, so the loop has to end at N-1 in the above code.
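Concretely, a sketch of the corrected estimation loop (same variables as in the question; only the loop bound changes and the special case for the last iteration disappears):
% Sketch: build A and B from the first N-1 samples only
q = 1;
for k = 1:N-1
    w_telda = [  0      -w(3,k)  w(2,k); ...
                 w(3,k)  0      -w(1,k); ...
                -w(2,k)  w(1,k)  0 ];
    w_dash(:,k) = ( w(:,k+1) - w(:,k) ) / Ts;   % forward difference, always has a next sample
    A(q:q+2,:) = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    B(q:q+2,:) = u(k,:)';
    q = q + 3;
end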