I have two features, which I expand to contain all possible combinations of the two features up to order 6. When I run MATLAB's fminunc, it returns a weight vector in which all elements are 0.
The dataset is here
clear all;
clc;
data = load("P2-data1.txt");
m = length(data);
para = 0; % regularization parameter
%% Augment Feature
y = data(:,3);
new_data = newfeature(data(:,1), data(:,2), 3);
[~, n] = size(new_data);
betas1 = zeros(n,1); % initial weights
options = optimset('GradObj', 'on', 'MaxIter', 400);
[beta_new, cost] = fminunc(@(t)(regucostfunction(t, new_data, y, para)), betas1, options);
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('theta: \n');
fprintf(' %f \n', beta_new); % get all 0 here
% Compute accuracy on our training set
p_new = predict(beta_new, new_data);
fprintf('Train Accuracy after feature augmentation: %f\n', mean(double(p_new == y)) * 100);
fprintf('\n');
%% the functions are defined below
function g = sigmoid(z) % running properly
g = 1 ./ (1 + exp(-z));
end
function [J,grad] = regucostfunction(theta,x,y,para) % CalculateCost(x1,betas1,y);
m = length(y); % number of training examples
J = 0;
grad = zeros(size(theta));
hyp = sigmoid(x*theta);
err = (hyp - y)';
grad = (1/m)*(err)*x;
sum = 0;
for k = 2:length(theta)
sum = sum+theta(k)^2;
end
J = (1/m)*((-y' * log(hyp) - (1 - y)' * log(1 - hyp)) + para*(sum) );
end
function p = predict(theta, X)
m = size(X, 1); % Number of training examples
p = zeros(m, 1);
index = find(sigmoid(theta'*X') >= 0.5);
p(index,1) = 1;
end
function out = newfeature(X1, X2, degree)
out = ones(size(X1(:,1)));
for i = 1:degree
for j = 0:i
out(:, end+1) = (X1.^(i-j)).*(X2.^j);
end
end
end
data contains two columns of feature values followed by a third column of 0/1 labels.
The functions used are: newfeature returns the expanded features and regucostfunction computes the cost. When I used the same approach with the default (unexpanded) features it worked, so I think the problem here has to do with some coding issue.
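To check whether the analytic gradient is the issue, here is a minimal finite-difference comparison sketch (the test point and step size are arbitrary; it uses regucostfunction, new_data, y, para and n as defined above):
% Sketch: compare the analytic gradient with a central finite-difference estimate
theta0 = 0.01 * randn(n, 1);                 % arbitrary test point
[~, g_analytic] = regucostfunction(theta0, new_data, y, para);
g_numeric = zeros(n, 1);
h = 1e-6;
for k = 1:n
    e = zeros(n, 1); e(k) = h;
    g_numeric(k) = (regucostfunction(theta0 + e, new_data, y, para) ...
                  - regucostfunction(theta0 - e, new_data, y, para)) / (2*h);
end
max(abs(g_analytic(:) - g_numeric(:)))       % should be close to zero if the gradient is right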
I need to run the K-means algorithm on the key points from the SIFT algorithm in MATLAB. I want to cluster the key points in the image, but I do not know how to do it.
First, put the key points into X, with the x coordinates in the first column and the y coordinates in the second column, like this:
X = [reshape(keypxcoord,numel(keypxcoord),1), reshape(keypycoord,numel(keypycoord),1)];
Then, if you have the Statistics Toolbox, you can just use the built-in 'kmeans' function like this:
output = kmeans(X,num_clusters)
Otherwise, write your own kmeans function:
function [ min_group, mu ] = mykmeans( X,K )
%MYKMEANS
% X = N observations of D-element vectors (D-by-N, one observation per column)
% K = number of centroids
assert(K > 0);
D = size(X,1); %No. of r.v.
N = size(X,2); %No. of observations
group_size = zeros(1,K);
min_group = zeros(1,N);
step = 0;
%% init centroids
mu = kpp(X,K);
%% 2-phase iterative approach (local then global)
while step < 400
%% phase 1, batch update
old_group = min_group;
% computing distances
d2 = distances2(X,mu);
% reassignment all points to closest centroids
[~, min_group] = min(d2,[],1);
% recomputing centroids (K number of means)
for k = 1 : K
group_size(k) = sum(min_group==k);
% check empty group
%if group_size(k) == 0
assert(group_size(k)>0);
%else
mu(:,k) = sum(X(:,min_group==k),2)/group_size(k);
%end
end
changed = sum(min_group ~= old_group);
p1_converged = changed <= N*0.001;
%% phase 2, individual update
changed = 0;
for n = 1 : N
d2 = distances2(X(:,n),mu);
[~, new_group] = min(d2,[],1);
% recomputing centroids of affected groups
k = min_group(n);
if (new_group ~= k)
mu(:,k)=(mu(:,k)*group_size(k)-X(:,n));
group_size(k) = group_size(k) - 1;
mu(:,k)=mu(:,k)/group_size(k);
mu(:,new_group) = mu(:,new_group)*group_size(new_group)+ X(:,n);
group_size(new_group) = group_size(new_group) + 1;
mu(:,new_group)=mu(:,new_group)/group_size(new_group);
min_group(n) = new_group;
changed = changed + 1;
end
end
%% check convergence
if p1_converged && changed <= N*0.001
break;
else
step = step + 1;
end
end
end
function d2 = distances2(X, mu)
K = size(mu,2);
N = size(X,2);
d2 = zeros(K,N);
for j = 1 : K
d2(j,:) = sum((X - repmat(mu(:,j),1,N)).^2,1);
end
end
function mu = kpp( X,K )
% kmeans++ init
D = size(X,1); %No. of r.v.
N = size(X,2); %No. of observations
mu = zeros(D, K);
mu(:,1) = X(:,round(rand(1) * (size(X, 2)-1)+1));
for k = 2 : K
% computing distances between centroids and observations
d2 = distances2(X, mu(:,1:k-1));
% assignment
[min_dist, ~] = min(d2,[],1);
% select new centroids by selecting point with the cumulative dist
% value (distance) larger than random value (falls in range between
% dist(n-1) : dist(n), dist(0)= 0)
rv = sum(min_dist) * rand(1);
for n = 1 : N
if min_dist(n) >= rv
mu(:,k) = X(:,n);
break;
else
rv = rv - min_dist(n);
end
end
end
end
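Note that mykmeans expects the observations as columns (a D-by-N matrix), so the keypoint matrix X built above has to be transposed when calling it. A minimal usage sketch (the number of clusters here is arbitrary):
% assumes keypxcoord and keypycoord hold the SIFT keypoint coordinates
X = [reshape(keypxcoord,numel(keypxcoord),1), reshape(keypycoord,numel(keypycoord),1)];
K = 5;                                   % arbitrary number of clusters
[labels, centroids] = mykmeans(X', K);   % transpose: mykmeans wants D-by-N
figure; scatter(X(:,1), X(:,2), 20, labels, 'filled'); hold on
plot(centroids(1,:), centroids(2,:), 'kx', 'MarkerSize', 12, 'LineWidth', 2)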
I am trying to estimate regression and AR parameters for (loads of) linear regressions with AR error terms. (You could also think of this as an MA process with exogenous variables):
y_t = X_t*beta + u_t, where
u_t = phi_1*u_(t-1) + ... + phi_p*u_(t-p) + eps_t, with lags of length p
I am following the official MATLAB recommendations and use regARIMA to set up a number of regressions and extract regression and AR parameters (see the reproducible example below).
The problem: regARIMA is slow! For 5 regressions, MATLAB needs 14.24 sec, and I intend to run a large number of different regression models. Is there any quicker method around?
y = rand(100,1);
r2 = rand(100,1);
r3 = rand(100,1);
r4 = rand(100,1);
r5 = rand(100,1);
exo = [r2 r3 r4 r5];
tic
for p = 0:4
Mdl = regARIMA(3,0,0);
[EstMdl, ~, LogL] = estimate(Mdl,y,'X',exo,'Display','off');
end
toc
Unlike the regARIMA function, which uses maximum likelihood, the Cochrane-Orcutt procedure relies on iterated OLS regressions. There are a few more particularities about when this approach is valid (refer to the link posted), but for the aim of this question the approach is valid, and fast!
I modified James P. LeSage's code, which covers only AR lags of order 1, to cover lags of order p.
function result = olsc(y,x,arterms)
% PURPOSE: computes Cochrane-Orcutt ols Regression for AR1 errors
%---------------------------------------------------
% USAGE: results = olsc(y,x)
% where: y = dependent variable vector (nobs x 1)
% x = independent variables matrix (nobs x nvar)
%---------------------------------------------------
% RETURNS: a structure
% results.meth = 'olsc'
% results.beta = bhat estimates
% results.rho = rho estimate
% results.tstat = t-stats
% results.trho = t-statistic for rho estimate
% results.yhat = yhat
% results.resid = residuals
% results.sige = e'*e/(n-k)
% results.rsqr = rsquared
% results.rbar = rbar-squared
% results.iter = niter x 3 matrix of [rho converg iteration#]
% results.nobs = nobs
% results.nvar = nvars
% results.y = y data vector
% --------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
%---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606
% jpl@jpl.econ.utoledo.edu
% do error checking on inputs
if (nargin ~= 3); error('Wrong # of arguments to olsc'); end;
[nobs nvar] = size(x);
[nobs2 junk] = size(y);
if (nobs ~= nobs2); error('x and y must have same # obs in olsc'); end;
% ----- setup parameters
ITERMAX = 100;
converg = 1.0;
rho = zeros(arterms,1);
iter = 1;
% xtmp = lag(x,1);
% ytmp = lag(y,1);
% truncate 1st observation to feed the lag
% xlag = x(1:nobs-1,:);
% ylag = y(1:nobs-1,1);
yt = y(1+arterms:nobs,1);
xt = x(1+arterms:nobs,:);
xlag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
xlag(:,nvar*(tt-1)+1:nvar*(tt-1)+nvar) = x(arterms-tt+1:nobs-tt,:);
end
ylag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
ylag(:,tt) = y(arterms-tt+1:nobs-tt,:);
end
% setup storage for iteration results
iterout = zeros(ITERMAX,3);
while (converg > 0.0001) && (iter < ITERMAX)
% step 1, using intial rho = 0, do OLS to get bhat
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*beta;
% truncate 1st observation to account for the lag
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
% step 2, update estimate of rho using residuals
% from step 1
res_rho = (elagt'*elagt)\elagt' * et;
rho_last = rho;
rho = res_rho;
converg = sum(abs(rho - rho_last));
% iterout(iter,1) = rho;
iterout(iter,2) = converg;
iterout(iter,3) = iter;
iter = iter + 1;
end; % end of while loop
if iter == ITERMAX
% error('ols_corc did not converge in 100 iterations');
fprintf('ols_corc did not converge in 100 iterations\n');
end;
result.iter= iterout(1:iter-1,:);
% after convergence produce a final set of estimates using rho-value
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
result.beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*result.beta;
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
u = et - elagt*rho;
result.vare = std(u)^2;
result.meth = 'olsc';
result.rho = rho;
result.iter = iterout(1:iter-1,:);
% % compute t-statistic for rho
% varrho = (1-rho*rho)/(nobs-2);
% result.trho = rho/sqrt(varrho);
(I did not adapt the t-test in the last two lines for rho vectors of length p, but this should be straightforward to do.)
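For reference, a minimal usage sketch on random data like in the question (adding an intercept column is my assumption; adjust it to your regression specification):
y   = rand(100,1);
exo = [rand(100,1) rand(100,1) rand(100,1) rand(100,1)];
X   = [ones(100,1) exo];      % intercept plus the exogenous regressors
tic
res = olsc(y, X, 3);          % AR(3) error terms, comparable to regARIMA(3,0,0)
toc
res.beta                      % regression coefficients
res.rho                       % AR coefficients of the error term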
I am trying to simulate the rotation dynamics of a system. I am testing my code to verify that it's working using simulation, but I never recover the parameters I pass to the model. In other words, I can't re-estimate the parameters I chose for the model.
I am using MATLAB for that and specifically ode45. Here is my code:
% Load the input-output data
[torque outputs] = DataLogs2();
u = torque;
% using the simulation data
Ixx = 1.00;
Iyy = 2.00;
Izz = 3.00;
x0 = [0; 0; 0];
Ts = .02;
t = 0:Ts:Ts * ( length(u) - 1 );
[ T, x ] = ode45( @(t,x) rotationDyn( t, x, u(1+floor(t/Ts),:), Ixx, Iyy, Izz), t, x0 );
w = x';
N = length(w);
q = 1; % a counter for the A and B matrices
% The Algorithm
for k=1:1:N
w_telda = [ 0 -w(3, k) w(2,k); ...
w(3,k) 0 -w(1,k); ...
-w(2,k) w(1,k) 0 ];
if k == N % to handle the problem of the last iteration
w_dash(:,k) = (-w(:,k))/Ts;
else
w_dash(:,k) = (w(:,k+1)-w(:,k))/Ts;
end
a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
A(q:q+2,:) = a; % a 3N*9 matrix
B(q:q+2,:) = u(k,:)'; % a 3N*1 matrix % u(:,k)
q = q + 3;
end
% Forcing J to be diagonal. This is the case when we consider our quadcopter as two thin uniform
% rods crossed at the origin with a point mass (motor) at the end of each.
A_new = [A(:, 1) A(:, 5) A(:, 9)];
vec_J_diag = A_new\B;
J_diag = diag([vec_J_diag(1), vec_J_diag(2), vec_J_diag(3)])
eigenvalues_J_diag = eig(J_diag)
error = norm(A_new*vec_J_diag - B)
where my dynamic model is defined as:
function [dw, y] = rotationDyn(t, w, tau, Ixx, Iyy, Izz, varargin)
% The output equation
y = [w(1); w(2); w(3)];
% State equation
% dw = (I^-1)*( tau - cross(w, I*w) );
dw = [Ixx^-1 * tau(1) - ((Izz-Iyy)/Ixx)*w(2)*w(3);
Iyy^-1 * tau(2) - ((Ixx-Izz)/Iyy)*w(1)*w(3);
Izz^-1 * tau(3) - ((Iyy-Ixx)/Izz)*w(1)*w(2)];
end
Practically, what this code should do is calculate the eigenvalues of the inertia matrix J, i.e. recover the Ixx, Iyy, and Izz that I passed to the model at the very beginning (1, 2 and 3), but all I get are wrong results.
Is the problem with using ode45?
Well, the problem wasn't with the ode45 call. The problem is that in system identification you can only build an (n-1)-sample signal from an n-sample signal, so the loop has to end at N-1 in the code above.
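A minimal sketch of the corrected identification loop (only the upper bound changes; everything else is as in the question):
% use only N-1 samples: the forward difference w_dash is not available at the last sample
q = 1;
for k = 1:N-1
    w_telda = [  0       -w(3,k)   w(2,k); ...
                 w(3,k)   0       -w(1,k); ...
                -w(2,k)   w(1,k)   0      ];
    w_dash(:,k) = ( w(:,k+1) - w(:,k) ) / Ts;   % always valid now, no special case needed
    A(q:q+2,:) = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    B(q:q+2,:) = u(k,:)';
    q = q + 3;
end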
I got this code for LARS, but when I run it, it says X is undefined. I can't understand what X is. Why is there an error?
function [beta, A, mu, C, c, gamma] = lars(X, Y, option, t, standardize)
% Least Angle Regression (LAR) algorithm.
% Ref: Efron et. al. (2004) Least angle regression. Annals of Statistics.
% option = 'lar' implements the vanilla LAR algorithm (default);
% option = 'lasso' solves the lasso path with a modified LAR algorithm.
% t -- a vector of increasing positive real numbers. If given, LARS
% returns the solution at t.
%
% Output:
% A -- a sequence of indices that indicate the order in which variables enter the active set;
% beta: history of estimated LARS coefficients;
% mu -- history of estimated mean vector;
% C -- history of maximal current absolute correlations;
% c -- history of current correlations;
% gamma: history of LARS step size.
% Note: history is traced by rows. If t is given, beta is just the
% estimated coefficient vector at the constraint ||beta||_1 = t.
%
% Remarks:
% 1. LARS is originally proposed to estimate a sparse coefficient vector
% for a noisy over-determined linear system. LARS outputs estimates for all
% shrinkage/constraint parameters (homotopy).
%
% 2. LARS is well suited for Basis Pursuit (BP) purposes in the real domain. It
% automatically terminates when the current correlations for the inactive set are
% all zeros. The recovered coefficient vector is the last row of beta
% with the *lasso* option. Hence, this function provides a fast and
% efficient solution for the ell_1 minimization problem.
% Ref: Donoho and Tsaig (2006). Fast solution of ell_1-norm minimization problems when the solution may be sparse.
if nargin < 5, standardize = true; end
if nargin < 4, t = Inf; end
if nargin < 3, option = 'lar'; end
if strcmpi(option, 'lasso'), lasso = 1; else, lasso = 0; end
eps = 1e-10; % Effective zero
[n,p] = size(X);
if standardize,
X = normalize(X);
Y = Y-mean(Y);
end
m = min(p,n-1); % Maximal number of variables in the final active set
T = length(t);
beta = zeros(1,p);
mu = zeros(n,1); % Mean vector
gamma = []; % LARS step lengths
A = [];
Ac = 1:p;
nVars = 0;
signOK = 1;
i = 0;
mu_old = zeros(n,1);
t_prev = 0;
beta_t = zeros(T,p);
ii = 1;
tt = t;
% LARS loop
while nVars < m,
i = i+1;
c = X'*(Y-mu); % Current correlation
C = max(abs(c)); % Maximal current absolute correlation
if C < eps || isempty(t), break; end % Early stopping criteria
if 1 == i, addVar = find(C==abs(c)); end
if signOK,
A = [A,addVar]; % Add one variable to active set
nVars = nVars+1;
end
s_A = sign(c(A));
Ac = setdiff(1:p,A); % Inactive set
nZeros = length(Ac);
X_A = X(:,A);
G_A = X_A'*X_A; % Gram matrix
invG_A = inv(G_A);
L_A = 1/sqrt(s_A'*invG_A*s_A);
w_A = L_A*invG_A*s_A; % Coefficients of equiangular vector u_A
u_A = X_A*w_A; % Equiangular vector
a = X'*u_A; % Angles between x_j and u_A
beta_tmp = zeros(p,1);
gammaTest = zeros(nZeros,2);
if nVars == m,
gamma(i) = C/L_A; % Move to the least squares projection
else
for j = 1:nZeros,
jj = Ac(j);
gammaTest(j,:) = [(C-c(jj))/(L_A-a(jj)), (C+c(jj))/(L_A+a(jj))];
end
[gamma(i) min_i min_j] = minplus(gammaTest);
addVar = unique(Ac(min_i));
end
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Update coefficient estimates
% Check the sign feasibility of lasso
if lasso,
signOK = 1;
gammaTest = -beta(i,A)'./w_A;
[gamma2 min_i min_j] = minplus(gammaTest);
if gamma2 < gamma(i), % The case when sign consistency gets violated
gamma(i) = gamma2;
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Correct the coefficients
beta_tmp(A(unique(min_i))) = 0;
A(unique(min_i)) = []; % Delete the zero-crossing variable (keep the ordering)
nVars = nVars-1;
signOK = 0;
end
end
if Inf ~= t(1),
t_now = norm(beta_tmp(A),1);
if t_prev < t(1) && t_now >= t(1),
beta_t(ii,A) = beta(i,A) + L_A*(t(1)-t_prev)*w_A'; % Compute coefficient estimates corresponding to a specific t
t(1) = [];
ii = ii+1;
end
t_prev = t_now;
end
mu = mu_old + gamma(i)*u_A; % Update mean vector
mu_old = mu;
beta = [beta; beta_tmp'];
end
if 1 < ii,
noCons = (tt > norm(beta_tmp,1));
if 0 < sum(noCons),
beta_t(noCons,:) = repmat(beta_tmp',sum(noCons),1);
end
beta = beta_t;
end
% Normalize columns of X to have mean zero and length one.
function sX = normalize(X)
[n,p] = size(X);
sX = X-repmat(mean(X),n,1);
sX = sX*diag(1./sqrt(ones(1,n)*sX.^2));
% Find the minimum and its index over the (strictly) positive part of X
% matrix
function [m, I, J] = minplus(X)
% Remove complex elements and reset to Inf
[I,J] = find(0~=imag(X));
for i = 1:length(I),
X(I(i),J(i)) = Inf;
end
X(X<=0) = Inf;
m = min(min(X));
[I,J] = find(X==m);
You can find more information in the related paper:
Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert. Least angle regression. Ann. Statist. 32 (2004), no. 2, 407--499. doi:10.1214/009053604000000067.
http://projecteuclid.org/euclid.aos/1083178935.
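As for the error: X (together with Y) is an input you have to supply when calling the function. X is the n-by-p matrix of predictor columns and Y the n-by-1 response vector. A minimal sketch with synthetic data (the sizes, sparsity pattern and noise level are arbitrary):
n = 50; p = 10;
X = randn(n, p);                    % predictor matrix (n observations, p variables)
w_true = [3; -2; zeros(p-2, 1)];    % sparse ground-truth coefficients
Y = X * w_true + 0.1 * randn(n, 1); % noisy response
[beta, A] = lars(X, Y, 'lasso');    % beta: one row of coefficients per LARS step
plot(sum(abs(beta), 2), beta);      % coefficient paths against the ell_1 norm
xlabel('||beta||_1'); ylabel('coefficients');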
I've written some code to implement an algorithm that takes as input a vector q of real numbers, and returns as an output a complex matrix R. The Matlab code below produces a plot showing the input vector q and the output matrix R.
Given only the complex matrix output R, I would like to obtain the input vector q. Can I do this using least-squares optimization? Since there is a recursive running sum in the code (rs_r and rs_i), the calculation for a column of the output matrix is dependent on the calculation of the previous column.
Perhaps a non-linear optimization can be set up to recompose the input vector q from the output matrix R?
Looking at this in another way, I've used an algorithm to compute a matrix R. I want to run the algorithm "in reverse," to get the input vector q from the output matrix R.
If there is no way to recompose the starting values from the output, thereby treating the problem as a "black box," then perhaps the mathematics of the model itself can be used in the optimization? The program evaluates the following equation:
The Utilde(tau, omega) is the output matrix R. The tau (time) variable comprises the columns of the response matrix R, whereas the omega (frequency) variable comprises the rows of the response matrix R. The integration is performed as a recursive running sum from tau = 0 up to the current tau timestep.
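To illustrate the column-to-column dependence introduced by that running sum, here is a simplified, self-contained sketch of the same accumulation pattern (the values here are placeholders, not the real integrand):
% each output column depends on the accumulator state left by the previous column
N = 10; N2 = 4; dt = 0.0042;
term = rand(N2, N);                 % stand-in for the per-column integrand
rs   = zeros(N2, 1);                % running-sum accumulator, one value per frequency row
out  = ones(N2, N);
for ti = 2:N
    rs = rs - dt .* (term(:,ti) + term(:,ti-1));   % same update pattern as rs_r / rs_i below
    out(:,ti) = exp(-0.5 .* rs);                   % column ti depends on all earlier columns
end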
The program posted below creates plots of the input vector q and of the magnitude of the output matrix R (abs(R)).
Here is the full program code:
N = 1001;
q = zeros(N, 1); % here is the input
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv); % R is output matrix
rows = wSize;
cols = N;
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
Here is the function that performs the calculation:
function response = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
term0 = omega ./ (2 .* Q(ti));
gamma = 1 / (pi * Q(ti));
% calculate for the real part
if(ti == 1)
Lambda = ones(N2, 1);
termr_sub1(1) = 0;
termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
else
termr(1) = 0;
termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
rs_r = rs_r - dt.*(termr + termr_sub1);
termr_sub1 = termr;
Beta = exp( -1 .* -0.5 .* rs_r );
Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
end
% calculate for the complex part
if(ginv == 1)
termi(1) = 0;
termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
else
termi = (ratio.^(sign .* gamma) - 1) .* omega;
end
rs_i = rs_i - dt.*(termi + termi_sub1);
termi_sub1 = termi;
integrand = exp( 1i .* -0.5 .* rs_i );
if(ginv == 1)
response(:,ti) = Lambda .* integrand;
else
response(:,ti) = (1 ./ Lambda) .* integrand;
end
end % ti loop
No, you cannot do so unless you know the "model" itself for this process. If you intend to treat the process as a complete black box, then it is impossible in general, although in any specific instance, anything can happen.
Even if you DO know the underlying process, it may still not work, as any least-squares estimator depends on the starting values; if you do not have a good guess there, it may converge to the wrong set of parameters.
It turns out that by using the mathematics of the model, the input can be estimated. This is not true in general, but for my problem it seems to work. The cumulative integral is eliminated by a partial derivative.
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
imagLogR = imag(log(R));
Mderiv = zeros(rows, cols-2);
for k = 1:rows
val = deriv_3pt(imagLogR(k,:), dt);
val(val > cut_val) = 0;
Mderiv(k,:) = val(1:end-1);
end
disp('Running iteration');
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
data = Mderiv(:,k);
qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
Here are the supporting functions:
function output = deriv_3pt(x, dt)
% Function to compute dx/dt using the 3pt symmetrical rule
% dt is the timestep
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
function sse = curve_fit_to_get_q(q, dt, rows, data)
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
termi = ((ratio.^(gamma)) - 1) .* omega;
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);