I am using MATLAB's lsqcurvefit function to fit the values calculated by a model function to observed data, optimizing two parameters of that function. After running the code I get optimized parameter values, but the fit between the simulated curve and the observed curve is quite bad, as can be seen here. I have tried the Levenberg-Marquardt algorithm as well as trust-region-reflective, and tried reducing the function tolerance, but to no avail. What can I do to make the simulated curve match the observed curve more closely? Alternatively, is there curve-fitting software with a GUI so that I can manually adjust my simulated curve until it resembles the observed one?
The code I am using is
function wtfinal = fst(para,tes)
x = 45;          % fixed distance
k = para(1);     % first fitted parameter
b = 2;           % unused in this function
S = para(2);     % second fitted parameter
D = k*2/S;
tes = 1:998;     % note: this overrides the input argument tes
% Note: the spreadsheet is re-read on every objective evaluation;
% loading it once outside fst and passing it in would be much faster.
g_vecrow = (xlsread('signaal 1.xlsx','signal','D2:D999'))';
g_vec = g_vecrow - g_vecrow(1);
t_vec = tes.*5;
wt = zeros(1,998);  % preallocate
for i = 2:998
    t = t_vec(i);
    g = g_vec(i);
    tow = 0:5:t-1;
    f = g.*(t - tow).^(-3/2).*exp(-x^2./(4*D*(t - tow)));
    wt(i) = ((1/D)^(1/2)*x)/(2*sqrt(pi))*trapz(tow,f);
end
wtfinal = wt + 147.902;
end
and I am calling this function as follows:
clear all; close all; clc;
ydata = (xlsread('signaal 1.xlsx','signal','C2:C999'))';
tes = 1:998;
x0 = [0.0327 0.00172];   % initial guesses for [k S]
lb = [];
ub = [];
opts = optimset('Algorithm','levenberg-marquardt');
[newpara,resnorm,~,exitflag,output] = lsqcurvefit(@fst,x0,tes,ydata,lb,ub,opts)
figure
plot(tes,ydata)
hold on
simulated = fst(newpara,tes);
plot(tes,simulated,'r')
The data file 'signaal 1' can be obtained from here
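Since lsqcurvefit reports convergence but the fit is poor, the optimizer has likely found a local minimum, so restarting from many initial points is worth trying before touching the tolerances further. Below is a minimal sketch using MultiStart from the Global Optimization Toolbox; the bounds and the number of starts (50) are placeholder choices, not values from the original code:
% Sketch: rerun lsqcurvefit from 50 random start points.
% The bounds are placeholders; MultiStart samples start points inside them.
problem = createOptimProblem('lsqcurvefit', ...
    'objective', @fst, 'x0', x0, 'xdata', tes, 'ydata', ydata, ...
    'lb', [1e-4 1e-5], 'ub', [1 1]);
ms = MultiStart('Display','iter');
[bestpara, bestresnorm] = run(ms, problem, 50);
As for a GUI, MATLAB's own Curve Fitting app (cftool, in the Curve Fitting Toolbox) allows interactive fitting with custom equations, although it works with equations typed as expressions rather than with a function file like fst.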
Some colleagues and I are running out of memory when running the following function on a cluster, where F_scattered is a collection of interpolants generated by scatteredInterpolant. Can anyone suggest steps we can take to make the parfor loop less memory-intensive? I'm concerned that the code as written may be broadcasting all of F_scattered to each worker, but I'm not certain how to check whether that's happening. If it helps, there's more context at the bottom of this message. Thank you in advance for your help.
function [V1q,V2q] = interpolate(F_gridded,X_gridded,F_scattered,X_scattered)
% Locate each query point between two slices and compute interpolation weights.
w = F_gridded(X_gridded);
k = fix(w);      % index of the lower bracketing slice
w = w - k;       % weight of the upper bracketing slice
[n,m] = size(F_scattered);
assert(n==2)
F1 = cell(m,1);
F2 = cell(m,1);
parfor j = 1:m
    kj = (k == j);
    X = X_scattered(kj,:);
    if j < m
        W = w(kj);
        F1{j} = (1-W).*F_scattered{1,j}(X) + W.*F_scattered{1,j+1}(X);
        F2{j} = (1-W).*F_scattered{2,j}(X) + W.*F_scattered{2,j+1}(X);
    else
        F1{j} = F_scattered{1,j}(X);
        F2{j} = F_scattered{2,j}(X);
    end
end
V1q = nan(size(w));  % was "Vq = nan(size(w));", which left V1q/V2q uninitialized
V2q = nan(size(w));
for j = 1:m
    kj = (k == j);
    V1q(kj) = F1{j};
    V2q(kj) = F2{j};
end
end
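Because the loop body indexes F_scattered at both j and j+1, parfor cannot slice it, so the entire cell array is broadcast to every worker (ticBytes(gcp) and tocBytes(gcp) around the parfor will show the actual transfer volume). One standard mitigation is parallel.pool.Constant, which copies the data to each worker once and reuses it across subsequent parfor loops; each worker still holds a full copy, but the repeated transfers disappear. A minimal sketch, assuming the rest of interpolate stays as above:
Fc = parallel.pool.Constant(F_scattered);  % transferred to each worker once
parfor j = 1:m
    kj = (k == j);
    X = X_scattered(kj,:);
    if j < m
        W = w(kj);
        F1{j} = (1-W).*Fc.Value{1,j}(X) + W.*Fc.Value{1,j+1}(X);
        F2{j} = (1-W).*Fc.Value{2,j}(X) + W.*Fc.Value{2,j+1}(X);
    else
        F1{j} = Fc.Value{1,j}(X);
        F2{j} = Fc.Value{2,j}(X);
    end
end
To cut per-worker memory as well, only the one or two interpolants each iteration needs could be passed, e.g. by building a sliced cell array pairs{j} = F_scattered(:,j:min(j+1,m)) before the loop, so that parfor transmits one pair per iteration instead of the whole collection.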
Additional context: we are attempting a collocation on a seven-dimensional grid using an endogenous grid method. The relevant details are as follows:
• Our dimensions are (b1,dstate,omega,u,b2,b3,eps_pi). b1, b2, b3 and eps_pi are continuous; dstate, omega and u are discrete. eps_pi is scattered due to the endogenous grid algorithm, but the grids for all other variables were built from vectors using ndgrid.
• Because scatteredInterpolant accepts at most three-dimensional arguments, we construct our interpolants by "slicing" our collocation grid in (b1,dstate,omega,u), then running scatteredInterpolant on the (b2,b3,eps_pi) values associated with each slice.
• Since b1 is the only continuous variable in (b1,dstate,omega,u), we interpolate by (i) finding the two slices that bracket the query values for (b1,dstate,omega,u), (ii) evaluating the scattered interpolants associated with those slices at the query values for (b2,b3,eps_pi), and (iii) taking an appropriately weighted average of the outputs.
Here are the key parts of the code in relatively minimal form:
%%% setting up the grid vectors
b1_t_grid = linspace(0,1,8);
dstate_t_grid = [-1:12];
omega_t_grid = [-2,2];
u_t_grid = [0,1];
b2_t_grid = linspace(0,1,5);
b3_t_grid = linspace(0,1,5);
quasidiff_t_grid = linspace(-9*0.05*4*2,9*0.05*4*2,8); % "target" variable for the endogenous grid method
%%% separating sliced dimensions from the others
gridded_dims_TSnow = [numel(b1_t_grid),numel(dstate_t_grid),numel(omega_t_grid),numel(u_t_grid)];
n_gridded_substates_TSnow = prod(gridded_dims_TSnow);
scattered_dims_TSnow = [numel(b2_t_grid),numel(b3_t_grid),numel(quasidiff_t_grid)];
n_scattered_substates_TSnow = prod(scattered_dims_TSnow);
[b1_ts_TSnow,dstate_ts_TSnow,omega_ts_TSnow,u_ts_TSnow,b2_ts_TSnow,b3_ts_TSnow,quasidiff_ts_TSnow] = ndgrid(b1_t_grid,dstate_t_grid,omega_t_grid,u_t_grid,b2_t_grid,b3_t_grid,quasidiff_t_grid);
b1_ts_TSnow = reshape(b1_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
dstate_ts_TSnow = reshape(dstate_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
omega_ts_TSnow = reshape(omega_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
u_ts_TSnow = reshape(u_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
b2_ts_TSnow = reshape(b2_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
b3_ts_TSnow = reshape(b3_ts_TSnow ,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
quasidiff_ts_TSnow = reshape(quasidiff_ts_TSnow,n_gridded_substates_TSnow,n_scattered_substates_TSnow);
grid_size_TSnow = numel(b1_ts_TSnow);
grid_dims_TSnow = size(b1_ts_TSnow);
%%% initial guess on the dimension to which we’ll ultimately apply the endogenous grid algorithm:
Gamma = @(yhat,omega) 0.05*(yhat-omega) + 0.10*(max(0,yhat-omega)).^2;
CP = @(omega,u) ((omega == 2).*(u == 1) - (omega == -2).*(u == 0))*0.05*4;
CP_ts_TSnow = CP(omega_ts_TSnow,u_ts_TSnow);
yhat_ts_TSnow_UNCnow = (ergoprob_L*omega_L + (1-ergoprob_L)*omega_H)*ones(grid_dims_TSnow); % ergoprob_L, omega_L, omega_H are defined elsewhere
eps_pi_ts_TSnow = quasidiff_ts_TSnow - Gamma(yhat_ts_TSnow_UNCnow,omega_ts_TSnow) - CP_ts_TSnow;
%%% pre-compute some stuff that will be useful for interpolation
substate_finder_TSnow = griddedInterpolant({b1_t_grid,dstate_t_grid,omega_t_grid,u_t_grid},reshape(1:n_gridded_substates_TSnow,gridded_dims_TSnow));
yhat_t_TSnow_fxns = cell(1,n_gridded_substates_TSnow); pihat_t_TSnow_fxns = cell(1,n_gridded_substates_TSnow);
parfor iii=1:n_gridded_substates_TSnow
yhat_t_TSnow_fxns{iii} = scatteredInterpolant(b2_ts_TSnow(iii,:)',b3_ts_TSnow(iii,:)',eps_pi_ts_TSnow(iii,:)', yhat_ts_TSnow_UNCnow(iii,:)');
pihat_t_TSnow_fxns{iii} = scatteredInterpolant(b2_ts_TSnow(iii,:)',b3_ts_TSnow(iii,:)',eps_pi_ts_TSnow(iii,:)',pihat_ts_TSnow_UNCnow(iii,:)'); % pihat_ts_TSnow_UNCnow is defined elsewhere
end
%%% example of an interpolation, given an arbitrary grid of query points (b1s,dstates,omegas,us,b2s,b3s,epses)
yhats = NaN(size(b1s));
pihats= NaN(size(b1s));
[yhats(:),pihats(:)] = interpolate(substate_finder_TSnow,[b1s(:),dstates(:),omegas(:),us(:)],...
    [yhat_t_TSnow_fxns;pihat_t_TSnow_fxns],[b2s(:),b3s(:),epses(:)]);
I need some help... I have a model function, which is:
function y = Surf(param,x)
global af1 af2 tData mER1 mER2 % mER1/mER2 added to the declaration; they are used below but were not declared
A1 = param(1); m1 = param(2); A2 = param(3); m2 = param(4);
m = param(5); n = param(6);
k1 = @(T) A1*exp(mER1/T);
k2 = @(T) A2*exp(mER2/T);
af = @(T) sech(af1*T+af2);
y = zeros(length(x),1);
for i = 1:length(x)
    a = x(i,1); T = temperature(i,1); % temperature must be accessible here, e.g. via the global data
    y(i) = (k2(T)+k1(T)*(a.^m))*((af(T)-a).^n);
end
end
And I have a data set giving Cure, Cure_rate and Temperature, each stored as a single column vector.
Basically, I tried to use:
[output,R1] = lsqcurvefit(@Surf, initial_guess, Cure, Cure_rate)
[output2,R2] = nlinfit(Cure,Cure_rate,@Surf,initial_guess)
And they work pretty well (initial_guess holds the initial guesses for the parameters of the above model: [1.1e+07 -7.8e+03 1.2e+06 -7.1e+03 2.2 0.72]).
My main problem is that when I try other methods that can do nonlinear regression, such as fminsearch, fmincon, fsolve, fminunc, etc., they just don't work, and I am confused about the inputs they expect. Unlike nlinfit and lsqcurvefit, which take the data (Cure, Cure_rate) as inputs, most of them take only the model function and the initial guess. This is the way I tried them:
output3 = fminsearch(@Surf,initial_guess)
output4 = fsolve(@Surf,initial_guess)
output5 = fmincon(@Surf,x0,A,b,Aeq,beq)
(I am not sure what to put for the linear inequality and equality constraints A, b and Aeq, beq.)
output6 = fminunc(@Surf,initial_guess)
The problem is that MATLAB keeps saying I have either not enough or too many input arguments, which I don't understand. How should I include my data set (Cure, Cure_rate) in these functions, the way nlinfit and lsqcurvefit do?
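The general-purpose solvers (fminsearch, fminunc, fmincon) minimize a scalar function of the parameters only, so the data has to be baked into the objective yourself, typically with an anonymous function that captures Cure and Cure_rate and returns the sum of squared residuals. A minimal sketch, assuming Cure, Cure_rate and initial_guess are defined as above:
% Scalar objective: sum of squared residuals between model and data.
ssr = @(p) sum((Surf(p,Cure) - Cure_rate).^2);
output3 = fminsearch(ssr, initial_guess);           % derivative-free minimization
output6 = fminunc(ssr, initial_guess);              % quasi-Newton minimization
output5 = fmincon(ssr, initial_guess, [],[],[],[]); % pass [] when there are no linear constraints
fsolve is the odd one out: it solves systems of equations F(p) = 0 rather than minimizing, so it would need the residual vector @(p) Surf(p,Cure) - Cure_rate instead of a scalar, and it is generally not the right tool for least-squares fitting.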
My calculation involves cosh(x) and sinh(x) for x around 700-1000, which exceeds the range of MATLAB's double precision, so the result is NaN. The problem in the code is that the argument 2*k_B*T./elastic_restor_coeff blows up when radius is small (below 5e-9 in the code). My goal is to do another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck on this problem.
My workaround right now is to approximate the real part of chi_para with a step function when threshold2 reaches a value of about 300. The number 300 was obtained by using the lowest possible radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since the value changes with radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate, since it looks like a pulse rather than a step.
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;   % Boltzmann constant [J/K]
T = 296;          % temperature [K]
radius = [5e-9, 10e-9, 20e-9, 30e-9, 100e-9];
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
    % per-radius coefficients (previously also computed, redundantly, before the loop)
    fric_coeff = 8*pi*1e-3*radius(i)^3;
    elastic_restor_coeff = 8*pi*1*radius(i)^3;
    time_const = fric_coeff/elastic_restor_coeff;
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        % the cosh/sinh arguments reach ~700-1000 for small radii and overflow
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = 1 + 1i*omega*integral(G_para_func, 0, inf);
        chi_perpen(k) = 1 + 1i*omega*integral(G_perpen_func, 0, inf);
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar,real(chi_para),omega_ar,imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar,real(chi_perpen),omega_ar,imag(chi_perpen));
    hold on;
end
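Instead of approximating with a step function, the overflow can be removed exactly: both integrands are ratios of cosh/sinh terms, and with the identities cosh(b)-1 = exp(b)*(1-exp(-b))^2/2 and sinh(b) = exp(b)*(1-exp(-2*b))/2 the huge exponentials cancel analytically. Writing a = 2*k_B*T/elastic_restor_coeff and b = a*exp(-t/time_const) (so 0 < b <= a), the ratios become expressions in exp and expm1 that stay finite even for a around 700-1000. A minimal sketch of drop-in replacements for the two integrands above:
% Stable ratio identities, valid for a > 0 and b = a*exp(-t/time_const):
%   (cosh(b)-1)/(cosh(a)-1) = exp(b-a).*(expm1(-b)./expm1(-a)).^2
%    sinh(b)/sinh(a)        = exp(b-a).* expm1(-2*b)./expm1(-2*a)
a = 2*k_B*T./elastic_restor_coeff;
G_para_func = @(t) exp(a.*exp(-t./time_const) - a) ...
    .* (expm1(-a.*exp(-t./time_const))./expm1(-a)).^2 ...
    .* exp(1i.*omega.*t);
G_perpen_func = @(t) exp(a.*exp(-t./time_const) - a) ...
    .* expm1(-2*a.*exp(-t./time_const))./expm1(-2*a) ...
    .* exp(1i.*omega.*t);
These are algebraically identical to the original integrands but never evaluate cosh or sinh of a large argument, so integral can be used unchanged and no radius-dependent cut-off is needed.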
Here is the simplified function that I would like to approximate (the expression was given as an image, not reproduced here), where x is iterated in a loop and the maximum value of x is about 700.
I would like to calculate standard errors of contrasts in a linear mixed-effects model (fitlme) in MATLAB.
y = randn(100,1);
area = randi([1 3],100,1);
mea = randi([1 3],100,1);
sub = randi([1 5],100,1);
data = array2table([area mea sub y],'VariableNames',{'area','mea','sub','y'});
data.area = nominal(data.area,{'A','B','C'});
data.mea = nominal(data.mea,{'Baseline','+1h','+8h'});
data.sub = nominal(data.sub);
lme = fitlme(data,'y~area*mea+(1|sub)')
% Plot Area A on three measurements
coefv = table2array(dataset2table(lme.Coefficients(:,2)));
bar([coefv(1),sum(coefv([1 4])),sum(coefv([1 5]))])
Calculating the contrast means, e.g. area A at the Baseline vs. +1h vs. +8h measurements, can be done by summing the related coefficients. However, does anyone know how to calculate the corresponding standard errors?
I know a hypothesis test can be done with coefTest(lme,H), but only p-values can be extracted from it.
An example bar plot for Area A followed here (figure not reproduced).
I have resolved this issue!
MATLAB's predict function can be used to estimate the contrasts. To find the confidence interval for area A at measurement +8h in this particular example, use:
dsnew = dataset();
dsnew.area = nominal('A');
dsnew.mea = nominal('+8h');
dsnew.sub = nominal(1);
[yh yCI] = predict(lme,dsnew,'Conditional',false)
A result is shown below (output not reproduced here).
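For the standard error of an arbitrary fixed-effects contrast H*beta, the coefficient covariance matrix can also be used directly, without going through predict. A minimal sketch; the contrast vector H below (intercept plus the +8h coefficient, matching sum(coefv([1 5])) above) is illustrative only, so check lme.CoefficientNames for the actual ordering in your model:
H = [1 0 0 0 1 0 0 0 0];                      % illustrative contrast: Area A at +8h
cmean = H*fixedEffects(lme);                  % contrast mean
cse = sqrt(H*lme.CoefficientCovariance*H');   % its standard error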
I have a set of three vectors (stored in a 3xN matrix) which are 'entangled' (e.g. some values in the second row should be in the third row and vice versa). This 'entanglement' is apparent in the figure in which alpha2 is plotted. To separate the vectors I use a difference-based approach, where I calculate the difference between one value and the three values at the next position (e.g. comparing (1,i) with (:,i+1)). Then I take the minimum and store it. The method manages to separate two of the three vectors, but not the last one.
I was wondering if you can share your ideas on how to solve this problem (if possible). I have added my code below.
Thanks in advance!
Problem in figures (plots not reproduced here):
clear all; close all; clc;
%%
alpha2 = [-23.32 -23.05 -22.24 -20.91 -19.06 -16.70 -13.83 -10.49 -6.70;
           -0.46  -0.33   0.19   2.38   5.44   9.36  14.15  19.80  26.32;
           -1.58  -1.13   0.06   0.70   1.61   2.78   4.23   5.99   8.09];
%%% Original
figure()
hold on
plot(alpha2(1,:))
plot(alpha2(2,:))
plot(alpha2(3,:))
%%% Store start values
store1(1,1) = alpha2(1,1);
store2(1,1) = alpha2(2,1);
store3(1,1) = alpha2(3,1);
for i = 1:size(alpha2,2)-1
    % distance from the last accepted value to each candidate at i+1
    for j = 1:size(alpha2,1)
        Alpha1(j,i) = abs(store1(1,i)-alpha2(j,i+1));
        Alpha2(j,i) = abs(store2(1,i)-alpha2(j,i+1));
        Alpha3(j,i) = abs(store3(1,i)-alpha2(j,i+1));
    end
    % pick the closest candidate for each track
    [~, I] = min(Alpha1(:,i));
    store1(1,i+1) = alpha2(I,i+1);
    [~, I] = min(Alpha2(:,i));
    store2(1,i+1) = alpha2(I,i+1);
    [~, I] = min(Alpha3(:,i));
    store3(1,i+1) = alpha2(I,i+1);
end
%%% Plot to see if separation worked
figure()
hold on
plot(store1)
plot(store2)
plot(store3)
Solution using extrapolation via polyfit:
The idea is pretty simple: iterate over all positions i and, for each function, use polyfit to fit a polynomial through the window of recent values from F(:,max(1,i-(d+1))) up to F(:,i), where d is the maximal extrapolation degree. Use that polynomial to extrapolate the function value at i+1. Then compute the permutation of the actual values F(:,i+1) that fits those extrapolations best. This should work quite well if there are only a few functions involved. There is certainly some room for improvement, but for your simple setting it should suffice.
function F = untangle(F, maxExtrapolationDegree)
%// UNTANGLE(F) untangles the functions F(i,:) via extrapolation.
if nargin < 2
    maxExtrapolationDegree = 4;
end
extrapolate = @(f) polyval(polyfit(1:length(f),f,length(f)-1),length(f)+1);
extrapolateAll = @(F) cellfun(extrapolate, num2cell(F,2));
fitCriterion = @(X,Y) norm(X(:)-Y(:),1);
nFuncs = size(F,1);
nPoints = size(F,2);
swaps = perms(1:nFuncs);
errorOfFit = zeros(1,size(swaps,1));
for i = 1:nPoints-1
    nextValues = extrapolateAll(F(:,max(1,i-(maxExtrapolationDegree+1)):i));
    for j = 1:size(swaps,1)
        errorOfFit(j) = fitCriterion(nextValues, F(swaps(j,:),i+1));
    end
    [~,j_bestSwap] = min(errorOfFit);
    F(:,i+1) = F(swaps(j_bestSwap,:),i+1);
end
Initial solution (not that pretty, skip this part):
This is a similar solution that tries to minimize the sum of the derivatives, up to some degree, of the vector-valued function F = @(j) alpha2(:,j). It does so by stepping through the positions i and checking all possible permutations of the coordinates at i to get a minimal seminorm of the function F(1:i).
(I'm actually wondering right now if there is any canonical mathematical way to define the seminorm so we get our expected results... I initially was going for the H^1 and H^2 seminorms, but they didn't quite work...)
function F = untangle(F)
nFuncs = size(F,1);
nPoints = size(F,2);
seminorm = @(x,i) sum(sum(abs(diff(x(:,1:i),1,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),2,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),3,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),4,2))));
doSwap = @(x,swap,i) [x(:,1:i-1), x(swap,i:end)];
swaps = perms(1:nFuncs);
normOfSwap = zeros(1,size(swaps,1));
for i = 2:nPoints
    for j = 1:size(swaps,1)
        normOfSwap(j) = seminorm(doSwap(F,swaps(j,:),i),i);
    end
    [~,j_bestSwap] = min(normOfSwap);
    F = doSwap(F,swaps(j_bestSwap,:),i);
end
Usage:
The command alpha2 = untangle(alpha2); will untangle your functions:
It should even work for more complicated data, like these shuffled sine-waves:
nPoints = 100;
nFuncs = 5;
t = linspace(0, 2*pi, nPoints);
F = bsxfun(@(a,b) sin(a*b), (1:nFuncs).', t);
for i = 1:nPoints
F(:,i) = F(randperm(nFuncs),i);
end
Remark: I guess that if you already know your functions will be quadratic or of some other special form, RANSAC would be a better idea for a larger number of functions. This could also be useful if the functions are not given with the same x-value spacing.