Finding Percent Error of a Fourier Series - matlab

Find the error as a function of n, where the error is defined as the difference between the voltage from the Fourier series (vF(t)) and the value from the ideal function (v(t)), normalized to the maximum magnitude (Vm):
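As I read it, that means

err(t) = (vF(t) - v(t)) / Vm * 100%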
I am given this prompt where Vm = 1 V. Below this line is the code which I have written.
I am trying to write a function to solve this question: Plot the error versus time for n = 3, n = 5, n = 10, and n = 50 (10 points). What does it look like I am doing incorrectly?
clc;
close all;
clear all;
% define the signal parameters
Vm = 1;
T = 1;
w0 = 2*pi/T;
% define the symbolic variables
syms n t;
% define the signal
v1 = Vm*sin(4*pi*t/T);
v2 = 2*Vm*sin(4*pi*t/T);
% evaluate the fourier series integral
an1 = 2/T*int(v1*cos(n*w0*t),0,T/2) + 2/T*int(v2*cos(n*w0*t),T/2,T);
bn1 = 2/T*int(v1*sin(n*w0*t),0,T/2) + 2/T*int(v2*sin(n*w0*t),T/2,T);
a0 = 1/T*int(v1,0,T/2) + 1/T*int(v2,T/2,T);
% obtain C by substituting n in c[n]
nmax = 100;
n = 1:nmax;
a = subs(an1);
b = subs(bn1);
% define the time vector
ts = 1e-2; % ts is the sampling interval
t = 0:ts:3*T-ts;
% directly plot the signal x(t)
t1 = 0:ts:T-ts;
v1 = Vm*sin(4*pi*t1/T).*(t1<=T/2);
v2 = 2*Vm*sin(4*pi*t1/T).*(t1>T/2).*(t1<T);
v = v1+v2;
x = repmat(v,1,3);
% Now fourier series reconstruction
N = [3];
for p = 1:length(N)
for i = 1:length(t)
for k = N(p)
x(k,i) = a(k)*cos(k*w0*t(i)) + b(k)*sin(k*w0*t(i));
end
% y(k,i) = a0+sum(x(:,i)); % Add DC term
end
end
z = a0 + sum(x);
figure(1);
plot(t,z);
%Percent error
function [per_error] = percent_error(measured, actual)
per_error = abs(( (measured - actual) ./ 1) * 100);
end
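For reference, this is how I intend to use percent_error once the reconstruction works (v_rec and v_ideal are placeholder names for the reconstructed and ideal signals; they are not defined in the code above):
% hypothetical usage: v_rec = Fourier-series reconstruction, v_ideal = ideal v(t)
err = percent_error(v_rec, v_ideal); % percent error, normalized to Vm = 1
figure;
plot(t, err);
xlabel('t [s]');
ylabel('error [%]');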

The purpose of the forum is helping with specific technical questions, not doing your homework.

Related

Fourier series not plotting correct amplitude

I am trying to plot a 10% duty cycle square wave using matlab, but for some reason the amplitude of the series changes in an unpredictable way for different values of N. I was expecting the amplitude to be 1 (i.e. from [-1;1]). I am not sure what I need to change.
% Assignment of variables
syms t
% Function variables
N = 5;
T0 = 1;
w0 = 2*pi/T0;
Imin1 = 0;
Imax1 = 0.1;
% If square wave with mean at 0
bool = 1;
Imin2 = 0.1;
Imax2 = 1;
% Function
ft = 1;
% First term calculation
a0 = (1/T0)*int(ft, t, Imin1, Imax1) + (bool)*(1/T0)*int((-ft), t, Imin1, Imax1);
y = a0;
% Calculation of n terms
for n = 1:N
an = (2/T0)*int(ft*cos(n*w0*t), t, Imin1, Imax1) + (bool)*(2/T0)*int((-ft)*cos(n*w0*t), t, Imin2, Imax2);
bn = (2/T0)*int(ft*sin(n*w0*t), t, Imin1, Imax1) + (bool)*(2/T0)*int((-ft)*sin(n*w0*t), t, Imin2, Imax2);
y = y + an*cos(n*w0*t) + bn*sin(n*w0*t);
end
fplot(y, [0,4], "Black")

Plotting the results of a Newton-Raphson solution for multiple cases

Consider the following problem:
I am now in the third part of this question. I wrote the vector loop equations (q = theta2, x = theta3 and y = theta4):
fval(1,1) = r2*cos(q)+r3*cos(x)-r4*cos(y)-r1;
fval(2,1) = r2*sin(q)+r3*sin(x)-r4*sin(y);
I have these 2 functions, and all variables except x and y are given. I found the roots with the help of this video.
Now I need to plot graphs of q versus x and q versus y for q in [0, 2*pi] with a step in q of 2.5 degrees. What should I do to plot the graphs?
Below is my attempt so far:
function [fval,jac] = lorenzSystem(X)
%Define variables
x = X(1);
y = X(2);
q = pi/2;
r2 = 15
r3 = 50
r4 = 45
r1 = 40
%Define f(x)
fval(1,1)=r2*cos(q)+r3*cos(x)-r4*cos(y)-r1;
fval(2,1)=r2*sin(q)+r3*sin(x)-r4*sin(y);
%Define Jacobian
jac = [-r3*sin(X(1)), r4*sin(X(2));
r3*cos(X(1)), -r4*cos(X(2))];
%% Multivariate NR
%Initial conditions:
X0 = [0.5;1];
maxIter = 50;
tolX = 1e-6;
X = X0;
Xold = X0;
for i = 1:maxIter
[f,j] = lorenzSystem(X);
X = X - inv(j)*f;
err(:,i) = abs(X-Xold);
Xold = X;
if (err(:,i)<tolX)
break;
end
end
Please take a look at my solution below, and study how it differs from your own.
function [th2,th3,th4] = q65270276()
[th2,th3,th4] = lorenzSystem();
hF = figure(); hAx = axes(hF);
plot(hAx, deg2rad(th2), deg2rad(th3), deg2rad(th2), deg2rad(th4));
xlabel(hAx, '\theta_2')
xticks(hAx, 0:pi/3:2*pi);
xticklabels(hAx, {'$0$','$\frac{\pi}{3}$','$\frac{2\pi}{3}$','$\pi$','$\frac{4\pi}{3}$','$\frac{5\pi}{3}$','$2\pi$'});
hAx.TickLabelInterpreter = 'latex';
yticks(hAx, 0:pi/6:pi);
yticklabels(hAx, {'$0$','$\frac{\pi}{6}$','$\frac{\pi}{3}$','$\frac{\pi}{2}$','$\frac{2\pi}{3}$','$\frac{5\pi}{6}$','$\pi$'});
set(hAx, 'XLim', [0 2*pi], 'YLim', [0 pi], 'FontSize', 16);
grid(hAx, 'on');
legend(hAx, '\theta_3', '\theta_4')
end
function [th2,th3,th4] = lorenzSystem()
th2 = (0:2.5:360).';
[th3,th4] = deal(zeros(size(th2)));
% Define geometry:
r1 = 40;
r2 = 15;
r3 = 50;
r4 = 45;
% Define the residual:
res = @(q,X)[r2*cosd(q)+r3*cosd(X(1))-r4*cosd(X(2))-r1; ... % Δx=0
             r2*sind(q)+r3*sind(X(1))-r4*sind(X(2))];       % Δy=0
% Define the Jacobian:
J = @(X)[-r3*sind(X(1)), r4*sind(X(2));
r3*cosd(X(1)), -r4*cosd(X(2))];
X0 = [acosd((45^2-25^2-50^2)/(-2*25*50)); 180-acosd((50^2-25^2-45^2)/(-2*25*45))]; % Accurate guess
maxIter = 500;
tolX = 1e-6;
for idx = 1:numel(th2)
X = X0;
Xold = X0;
err = zeros(maxIter, 1); % Preallocation
for it = 1:maxIter
% Update the guess
f = res( th2(idx), Xold );
X = Xold - J(Xold) \ f;
% X = X - pinv(J(X)) * res( q(idx), X ); % May help when J(X) is close to singular
% Determine convergence
err(it) = (X-Xold).' * (X-Xold);
if err(it) < tolX
break
end
% Update history
Xold = X;
end
% Unpack and store θ₃, θ₄
th3(idx) = X(1);
th4(idx) = X(2);
% Update X0 for faster convergence of the next case:
X0 = X;
end
end
Several notes:
All computations are performed in degrees.
The specific plotting code I used is less interesting; what matters is that I defined all θ₂ in advance, then looped over them to find θ₃ and θ₄ (without the recursion that appears in your own implementation).
The initial guess (actually, analytical solution) for the very first case (θ₂=0) can be found by solving the problem manually (i.e. "on paper") using the law of cosines. The solver also works for other guesses, but you might need to increase maxIter. Also, for certain guesses (e.g. X(1)==X(2)), the Jacobian is ill-conditioned, in which case you can use pinv.
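For instance, a hypothetical guard (not part of the code above) that replaces the update line inside the inner loop and falls back to the pseudo-inverse only when the Jacobian is poorly conditioned:
Jk = J(Xold);
if rcond(Jk) < 1e-12 % Jacobian nearly singular
    X = Xold - pinv(Jk) * f; % pseudo-inverse step
else
    X = Xold - Jk \ f; % regular Newton step
end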
If my computation is correct, this is the result:

Fast approach in matlab to estimate linear regression with AR terms

I am trying to estimate regression and AR parameters for (loads of) linear regressions with AR error terms. (You could also think of this as an MA process with exogenous variables):
y_t = X_t*beta + u_t, where
u_t = a_1*u_(t-1) + ... + a_p*u_(t-p) + eps_t, with lags of length p
I am following the official MATLAB recommendations and use regARIMA to set up a number of regressions and extract regression and AR parameters (see the reproducible example below).
The problem: regARIMA is slow! For 5 regressions, MATLAB needs 14.24 sec, and I intend to run a large number of different regression models. Is there any quicker method around?
y = rand(100,1);
r2 = rand(100,1);
r3 = rand(100,1);
r4 = rand(100,1);
r5 = rand(100,1);
exo = [r2 r3 r4 r5];
tic
for p = 0:4
Mdl = regARIMA(3,0,0);
[EstMdl, ~, LogL] = estimate(Mdl,y,'X',exo,'Display','off');
end
toc
Unlike the regARIMA function, which uses maximum likelihood, the Cochrane-Orcutt procedure relies on iterated OLS regressions. There are a few particularities regarding when this approach is valid (refer to the link posted), but for the aim of this question the approach is valid, and fast!
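In outline, the iteration implemented below works like this (for AR errors of order p, starting from rho = 0):
1. Transform the data, y*_t = y_t - rho_1*y_(t-1) - ... - rho_p*y_(t-p) (and likewise each column of x), and estimate beta by OLS of y* on x*.
2. Compute the residuals e_t = y_t - x_t*beta and update rho by OLS of e_t on its own lags e_(t-1), ..., e_(t-p).
3. Repeat until the change in rho drops below a tolerance.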
I modified James LeSage's code, which covers only AR lags of order 1, so that it covers lags of order p.
function result = olsc(y,x,arterms)
% PURPOSE: computes Cochrane-Orcutt ols Regression for AR1 errors
%---------------------------------------------------
% USAGE: results = olsc(y,x)
% where: y = dependent variable vector (nobs x 1)
% x = independent variables matrix (nobs x nvar)
%---------------------------------------------------
% RETURNS: a structure
% results.meth = 'olsc'
% results.beta = bhat estimates
% results.rho = rho estimate
% results.tstat = t-stats
% results.trho = t-statistic for rho estimate
% results.yhat = yhat
% results.resid = residuals
% results.sige = e'*e/(n-k)
% results.rsqr = rsquared
% results.rbar = rbar-squared
% results.iter = niter x 3 matrix of [rho converg iteration#]
% results.nobs = nobs
% results.nvar = nvars
% results.y = y data vector
% --------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
%---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606
% jpl@jpl.econ.utoledo.edu
% do error checking on inputs
if (nargin ~= 3); error('Wrong # of arguments to olsc'); end;
[nobs nvar] = size(x);
[nobs2 junk] = size(y);
if (nobs ~= nobs2); error('x and y must have same # obs in olsc'); end;
% ----- setup parameters
ITERMAX = 100;
converg = 1.0;
rho = zeros(arterms,1);
iter = 1;
% xtmp = lag(x,1);
% ytmp = lag(y,1);
% truncate 1st observation to feed the lag
% xlag = x(1:nobs-1,:);
% ylag = y(1:nobs-1,1);
yt = y(1+arterms:nobs,1);
xt = x(1+arterms:nobs,:);
xlag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
xlag(:,nvar*(tt-1)+1:nvar*(tt-1)+nvar) = x(arterms-tt+1:nobs-tt,:);
end
ylag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
ylag(:,tt) = y(arterms-tt+1:nobs-tt,:);
end
% setup storage for iteration results
iterout = zeros(ITERMAX,3);
while (converg > 0.0001) & (iter < ITERMAX),
% step 1, using initial rho = 0, do OLS to get bhat
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*beta;
% truncate 1st observation to account for the lag
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
% step 2, update estimate of rho using residuals
% from step 1
res_rho = (elagt'*elagt)\elagt' * et;
rho_last = rho;
rho = res_rho;
converg = sum(abs(rho - rho_last));
% iterout(iter,1) = rho;
iterout(iter,2) = converg;
iterout(iter,3) = iter;
iter = iter + 1;
end; % end of while loop
if iter == ITERMAX
% error('ols_corc did not converge in 100 iterations');
warning('ols_corc did not converge in 100 iterations');
end;
result.iter= iterout(1:iter-1,:);
% after convergence produce a final set of estimates using rho-value
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
result.beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*result.beta;
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
u = et - elagt*rho;
result.vare = std(u)^2;
result.meth = 'olsc';
result.rho = rho;
result.iter = iterout(1:iter-1,:);
% % compute t-statistic for rho
% varrho = (1-rho*rho)/(nobs-2);
% result.trho = rho/sqrt(varrho);
(In the last 2 commented lines I did not adapt the t-test for rho vectors of length p, but this should be straightforward to do.)
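For example, with the simulated data from the question, the call might look like this (a sketch; olsc does not add an intercept, so include one in the regressor matrix yourself, and arterms = 3 mirrors the regARIMA(3,0,0) specification above):
y = rand(100,1);
exo = [rand(100,1) rand(100,1) rand(100,1) rand(100,1)];
X = [ones(100,1) exo]; % intercept plus the four exogenous regressors
res = olsc(y, X, 3); % AR(3) error structure
disp(res.beta) % regression coefficients
disp(res.rho) % AR coefficients of the error term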

Can't recover the parameters of a model using ode45

I am trying to simulate the rotational dynamics of a system. I am testing my code with simulated data to verify that it's working, but I never recover the parameters I pass to the model. In other words, I can't re-estimate the parameters I chose for the model.
I am using MATLAB for that and specifically ode45. Here is my code:
% Load the input-output data
[torque outputs] = DataLogs2();
u = torque;
% using the simulation data
Ixx = 1.00;
Iyy = 2.00;
Izz = 3.00;
x0 = [0; 0; 0];
Ts = .02;
t = 0:Ts:Ts * ( length(u) - 1 );
[ T, x ] = ode45( @(t,x) rotationDyn( t, x, u(1+floor(t/Ts),:), Ixx, Iyy, Izz), t, x0 );
w = x';
N = length(w);
q = 1; % a counter for the A and B matrices
% The Algorithm
for k=1:1:N
w_telda = [ 0 -w(3, k) w(2,k); ...
w(3,k) 0 -w(1,k); ...
-w(2,k) w(1,k) 0 ];
if k == N % to handle the problem of the last iteration
w_dash(:,k) = (-w(:,k))/Ts;
else
w_dash(:,k) = (w(:,k+1)-w(:,k))/Ts;
end
a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
A(q:q+2,:) = a; % a 3N*9 matrix
B(q:q+2,:) = u(k,:)'; % a 3N*1 matrix % u(:,k)
q = q + 3;
end
% Forcing J to be diagonal. This is the case when we consider our quadcopter as two thin uniform
% rods crossed at the origin with a point mass (motor) at the end of each.
A_new = [A(:, 1) A(:, 5) A(:, 9)];
vec_J_diag = A_new\B;
J_diag = diag([vec_J_diag(1), vec_J_diag(2), vec_J_diag(3)])
eigenvalues_J_diag = eig(J_diag)
error = norm(A_new*vec_J_diag - B)
where my dynamic model is defined as:
function [dw, y] = rotationDyn(t, w, tau, Ixx, Iyy, Izz, varargin)
% The output equation
y = [w(1); w(2); w(3)];
% State equation
% dw = (I^-1)*( tau - cross(w, I*w) );
dw = [Ixx^-1 * tau(1) - ((Izz-Iyy)/Ixx)*w(2)*w(3);
Iyy^-1 * tau(2) - ((Ixx-Izz)/Iyy)*w(1)*w(3);
Izz^-1 * tau(3) - ((Iyy-Ixx)/Izz)*w(1)*w(2)];
end
Practically, what this code should do is calculate the eigenvalues of the inertia matrix J, i.e. recover the Ixx, Iyy, and Izz that I passed to the model at the very beginning (1, 2 and 3), but all I get are wrong results.
Is the problem with using ode45?
Well, the problem wasn't with the ode45 call. The problem is that in system identification one can only build an (n-1)-sample signal from an n-sample signal (the forward difference needs the next sample), so the loop has to end at N-1 in the code above.
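A minimal sketch of that fix, keeping the variable names from the question (only samples 1 to N-1 are used, which also removes the special case for the last iteration):
q = 1;
for k = 1:N-1
    w_telda = [ 0 -w(3,k) w(2,k); ...
                w(3,k) 0 -w(1,k); ...
                -w(2,k) w(1,k) 0 ];
    w_dash(:,k) = (w(:,k+1) - w(:,k))/Ts; % forward difference, always valid now
    a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    A(q:q+2,:) = a;
    B(q:q+2,:) = u(k,:)';
    q = q + 3;
end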

ode45 solving of diff. equation with further fitting to exp. results

I am building a code to solve a diff. equation:
function dy = KIN1PARM(t,y,k)
%
% version : first order reaction
% A --> B
% dA/dt = -k*A
% integrated form A = A0*exp(-k*t)
%
dy = -k.*y;
end
I want this equation to be solved numerically and the results (y as a function of t, and k) to be used for minimization with respect to the experimental values to get the optimal value of parameter k.
function SSE = SSE_minimization_1parm(tspan_inp,val_exp,k_inp,y0_inp)
f = @(Tt,Ty) KIN1PARM(Tt,Ty,k_inp); %function to call ode45
size_limit = length(y0_inp);
options = odeset('NonNegative',1:size_limit,'RelTol',1e-4,'AbsTol', 1e-4);
[ts,val_theo] = ode45(f, tspan_inp, y0_inp,options); %Cexp is the state variable predicted by the model
err = val_exp - val_theo;
SSE = sum(err.^2); %sum squared-error
The main code to plot the experimental and calculated data is:
% Analyzing first order kinetics
clear all; clc;
figure_title = 'Experimental Data';
label_abscissa = 'Time [s]';
label_ordinatus = 'Concentration [mol/L]';
%
abscissa = [ 0;
240;
480;
720;
960;
1140;
1380;
1620;
1800;
2040;
2220;
2460;
2700;
2940];
ordinatus = [ 0;
19.6;
36.7;
49.0;
57.1;
64.5;
71.4;
75.2;
78.7;
81.3;
83.3;
85.5;
87.0;
87.7];
%
title_string = [' Time [s]', ' | ', ' Complex [mol/L] ', ' '];
disp(title_string);
for i=1:length(abscissa)
report_raw_data{i} = sprintf('%1.3E\t',abscissa(i),ordinatus(i));
disp([report_raw_data{i}]);
end;
%---------------------/plotting dot data/-------------------------------------
%
f = figure('Position', [100 100 700 500]);
title(figure_title,'FontName','arial','FontWeight','bold', 'FontSize', 12);
xlabel(label_abscissa, 'FontSize', 12);
ylabel(label_ordinatus, 'FontSize', 12);
%
grid on; hold on;
%
marker_style = { 's'};
%
plot(abscissa,ordinatus, marker_style{1},...
'MarkerFaceColor', 'black',...
'MarkerEdgeColor', 'black',...
'MarkerSize',4);
%---------------------/Analyzing/----------------------------------------
%
options = optimset('Display','iter','TolFun',1e-4,'TolX',1e-4);
%
CPUtime0 = cputime;
Time_M = abscissa;
Concentration_M = ordinatus;
tspan = Time_M;
y0 = 0;
k0 = rand(1);
[k, fval, exitflag, output] = fminsearch(@(k) SSE_minimization_1parm(tspan,Concentration_M,k,y0),k0,options);
CPUtimex = cputime;
CPUtime_delay = CPUtimex - CPUtime0;
%
%---------------------/plotting calculated data/-------------------------------------
%
xupperlimit = Time_M(length(Time_M));
xval = ([0:1:xupperlimit])';
%
yvector = data4plot_1parm(xval,k,y0);
plot(xval,yvector, 'r');
hold on;
%---------------------/printing calculated data/-------------------------------------
%
disp('RESULTS:');
disp(['CPU time: ',sprintf('%0.5f\t',CPUtime_delay),' sec']);
disp(['k: ',sprintf('%1.3E\t',k')]);
disp(['fval: ',sprintf('%1.3E\t',fval)]);
disp(['exitflag: ',sprintf('%1.3E\t',exitflag)]);
disp(output);
disp(['Output: ',output.message]);
The corresponding function, which uses the optimized parameter k to yield the calculated y = f(t) data :
function val = data4plot_1parm(tspan_inp,k_inp,y0_inp)
f = @(Tt,Ty) KIN1PARM(Tt,Ty,k_inp);
size_limit = length(y0_inp);
options = odeset('NonNegative',1:size_limit,'RelTol',1e-4,'AbsTol',1e-4);
[ts,val_theo] = ode45(f, tspan_inp, y0_inp, options);
The optimization always gives different values of the parameter k, and they differ from the value calculated using ln(y) vs t (it should be around 7.0e-4 for that series of exp. data).
Looking at the outcome of the ode solver (SSE_minimization_1parm => val_theo) I found that the ode function gives me a vector of zeroes.
Could someone please help me figure out what's going on with the ode solver?
Thanks much in advance!
So here comes the best I can get right now. In my approach I treat the ordinatus values as time and the abscissa values as the measured quantity you are trying to model. Also, you seem to have set a lot of options for the solver, all of which I omitted. First comes your proposed solution using ode45(), but with a non-zero y0 = 100, which I just "guessed" from looking at the data (in a semilogarithmic plot).
function main
abscissa = [0;
240;
480;
720;
960;
1140;
1380;
1620;
1800;
2040;
2220;
2460;
2700;
2940];
ordinatus = [ 0;
19.6;
36.7;
49.0;
57.1;
64.5;
71.4;
75.2;
78.7;
81.3;
83.3;
85.5;
87.0;
87.7];
tspan = [min(ordinatus), max(ordinatus)]; % // assuming ordinatus is time
y0 = 100; % // <---- Probably the most important parameter to guess
k0 = -0.1; % // <--- second most important parameter to guess (negative for growth)
k_opt = fminsearch(@minimize, k0) % // optimization only over k
% nested minimization function
function e = minimize(k)
sol = ode45(@KIN1PARM, tspan, y0, [], k);
y_hat = deval(sol, ordinatus); % // evaluate solution at given times
e = sum((y_hat' - abscissa).^2); % // compute squared error
end
% // plot with optimal parameter
[T,Y] = ode45(@KIN1PARM, tspan, y0, [], k_opt);
figure
plot(ordinatus, abscissa,'ko', 'markersize',10,'markerfacecolor','black')
hold on
plot(T,Y, 'r--', 'linewidth', 2)
% // Another attempt with fminsearch and the integral form
t = ordinatus;
t_fit = linspace(min(ordinatus), max(ordinatus));
y = abscissa;
% create model function with parameters A0 = p(1) and k = p(2)
model = #(p, t) p(1)*exp(-p(2)*t);
e = #(p) sum((y - model(p, t)).^2); % minimize squared errors
p0 = [100, -0.1]; % an initial guess (positive A0 and probably negative k for exp. growth)
p_fit = fminsearch(e, p0); % Optimize
% Add to plot
plot(t_fit, model(p_fit, t_fit), 'b-', 'linewidth', 2)
legend('data', 'ode45 with fixed y0', ...
sprintf('integral form: %5.1f*exp(-%.4f)', p_fit), 'location', 'best')
end
function dy = KIN1PARM(t,y,k)
%
% version : first order reaction
% A --> B
% dA/dt = -k*A
% integrated form A = A0*exp(-k*t)
%
dy = -k.*y;
end
The result can be seen below. Quite surprisingly to me, the initial guess of y0 = 100 fits quite well with the optimal A0 found.