Potential Bug in MATLAB regress R2014a

MATLAB R2014a used to work fine with regress, but now I get an error even though the variables are fine and the rank is satisfactory.
X = rand([10 3])
X =
0.8407 0.3517 0.0759
0.2543 0.8308 0.0540
0.8143 0.5853 0.5308
0.2435 0.5497 0.7792
0.9293 0.9172 0.9340
0.3500 0.2858 0.1299
0.1966 0.7572 0.5688
0.2511 0.7537 0.4694
0.6160 0.3804 0.0119
0.4733 0.5678 0.3371
K>> Y = rand([10 1])
Y =
0.1622
0.7943
0.3112
0.5285
0.1656
0.6020
0.2630
0.6541
0.6892
0.7482
[B,BINT] = regress(Y,X)
Subscript indices must either be real positive integers or logicals.
Error in regress (line 93)
b(perm) = R \ (Q'*y);
Obviously, X and Y are fine. There's something wrong with the matrix math in regress: perm is, for some reason, coming back as a matrix rather than an index vector (hence the subscript error). A few lines above, qr is called like this, with no further modification to perm:
[Q,R,perm] = qr(X,0);
The help file says qr is supposed to output a third argument that's a matrix, but how can that be if the math always expects a vector?
% [Q,R,E] = QR(A) produces unitary Q, upper triangular R and a
% permutation matrix E so that A*E = Q*R. The column permutation E is
% chosen so that ABS(DIAG(R)) is decreasing.
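For reference, the two documented call forms return different third outputs; this is a sketch of what should happen in a working installation, per the qr help:
X = rand(10,3);
[Q,R,perm] = qr(X,0);  % economy size: perm should be a 1-by-3 permutation vector
[Q,R,E] = qr(X);       % full QR: E is a 3-by-3 permutation matrix (the form quoted above)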
Very confusing, considering both of these are built-in functions. I literally re-installed MATLAB R2014a and a few toolboxes and STILL get this error. It feels like qr got updated to output a different argument, but I don't understand why a fresh reinstall wouldn't take care of this, or why qr would have been updated at all anyway. Everything else in my MATLAB works fine.
Any ideas???

Related

How to fix "Matrix dimensions must agree"

I am trying to plot Z0 in terms of t,
and I get this error: Matrix dimensions must agree.
I know that T is a scalar, X and Y are 51x26 matrices, and t is a 1x501 vector. They are not conformable, so they cannot be multiplied together.
I need a solution, please; I am new to MATLAB.
sigma0=0;
el= 0.5;
L= 1 ;
h= 0.5 ;
a= 1;
N= 3;
g=10;
rho=1000;
Z0=0;
t=0:0.01:5;
x=0:0.02:el;
y=0:0.02:L;
[X,Y]=mesh grid (x,y);
sigma=0;
T=3*pi/4;
for n=0:N
for m=0:N
A=pi*((m/el)^(2)+(n/L)^(2))^(0.5);
B=(g*A+(sigma/rho)*A^3)*atan(A*h);
C=B^(0.5);
Z=a*cos(C.*T).*cos((m*pi/el).*X).*cos((n*pi/L).*Y);
Zs=Z0+Z;
Z0=Zs;
end
end
m=3;
n=4;
A=pi*((m/el)^(2)+(n/L)^(2))^(0.5);
B0=(g*A+(sigma0./rho)*A^3)*atan(A*h);
C0=(B0.^(0.5));
Z0=(a.*cos(C0.*T)).*cos((m*pi/el).*X).*(cos((n*pi/L).*Y));
figure
subplot(221)
plot(t,length(Z0));
xlabel(' temps s');
ylabel(' élévation z(x,y)y');
title(' sans tension superficielle');
legend('sigma0')
The result I expect to see a sinusoidal figure
I think all you need is replacing plot(t,length(Z0)); with plot(1:length(Z0), Z0);
I am not getting an error message, and I can't see where you try to multiply by t.
Execute clear all just in case...
Other than that, you posted a syntax error: [X,Y]=mesh grid (x,y); should be [X,Y]=meshgrid(x,y);.
Replace plot(t,length(Z0));
With: plot(1:length(Z0), Z0);
Here is the result:
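If the goal is really the elevation as a function of t (the sinusoidal curve the question expects), a minimal sketch, reusing the constants above and an arbitrary fixed point (x0, y0) of my own choosing, is to evaluate the mode there and let t vary:
% Hedged sketch: elevation vs. time at one illustrative point (x0, y0),
% reusing a, el, L, m, n, C0 and t from the script above
x0 = 0.25; y0 = 0.5;
Z0t = a*cos(C0*t) * cos((m*pi/el)*x0) * cos((n*pi/L)*y0);
plot(t, Z0t);
xlabel('temps s'); ylabel('élévation z(x0,y0)');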

How can I fit a curve to a step function?

I am trying to fit a curve to a step function. I tried a number of approaches, like using a sigmoid function, using a ratio of polynomials, and fitting a Gauss function to the derivative of the step, but none of them look right. Now I have come up with the idea of creating a perfect step, computing the convolution of the perfect step with a Gauss function, and finding the best-fit parameters using non-linear regression.
But this is also not looking good.
Here is the code for both the sigmoid and the convolution approaches.
First with Sigmoid Function fit:
Function:
function d=fit_sig(param,x,y)
a=param(1);
b=param(2);
d=(a./(1+exp(-b*x)))-y;
end
main code:
a=1, b=0.09;
p0=[a,b];
sig_coff=lsqnonlin(@fit_sig,p0,[],[],[],xavg_40s1,havg_40s1);
% Plot the original and experimental data.
sig_new = sig_coff(1)./(1+exp(-sig_coff(2)*xavg_40s1));
d = havg_40s1-sig_new;
figure;
plot(xavg_40s1,havg_40s1,'.-r',xavg_40s1,sig_new,'.-b');
xlabel('x-pixel'); ylabel('dz/dx (mm/pixel)'); axis square;
This is not working at all. I think my initial guesses are wrong. I tried multiple numbers but could not get it right. I tried using the Curve Fitting Tool too, but that is not working either.
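One likely reason the fit fails is that a./(1+exp(-b*x)) has no horizontal shift, so it cannot place the transition near x ≈ 15.3, where the data actually steps. A minimal sketch with a shifted, offset sigmoid (the parameter names and starting values here are illustrative guesses, not from the original code):
% Hypothetical 4-parameter sigmoid: amplitude, steepness, center, offset
sig4 = @(p,x) p(1)./(1 + exp(-p(2)*(x - p(3)))) + p(4);
p0 = [1, 10, 15.3, -0.04];  % rough values read off the data
p = lsqnonlin(@(p) sig4(p,xavg_40s1) - havg_40s1, p0);
plot(xavg_40s1, havg_40s1, '.-r', xavg_40s1, sig4(p,xavg_40s1), '.-b');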
Code for creating perfect step:
h=ones(1,numel(havg_40s1)); %height=1mm
h(1:81)=-0.038;
h(82:end)=1.002; %or 1.0143
figure;
plot(xavg_40s1,havg_40s1,'k.-', 'linewidth',1.5, 'markersize',16);
hold on
plot(xavg_40s1,h,'.-r','linewidth',1.5,'markersize',12);
Code using convolution approach:
Function:
function d=fit_step(param,h,x,y)
A=param(1);
mu=param(2);
sigma=param(3);
d=conv(h,A*exp(-((x-mu)/sigma).^2),'same')-y;
end
main code:
param1=[0.2247 8.1884 0.0802];
step_coff=lsqnonlin(@fit_step,param1,[],[],[],h,dx_40s1,havg_40s1);
% Plot the original and experimental data.
step_new = conv(h,step_coff(1)*exp(-((dx_40s1-step_coff(2))/step_coff(3)).^2),'same');
figure;
plot(xavg_40s1,havg_40s1,'.-r',xavg_40s1,step_new,'.-b');
This is close, but the edge of the step is shifted, and the corners look sharper than in the measured step.
Could someone please suggest the best way to fit a step function, or any way to improve this code?
X data:
12.6400 12.6720 12.7040 12.7360 12.7680 12.8000 12.8320 12.8640 12.8960 12.9280 12.9600 12.9920 13.0240 13.0560 13.0880 13.1200 13.1520 13.1840 13.2160 13.2480 13.2800 13.3120 13.3440 13.3760 13.4080 13.4400 13.4720 13.5040 13.5360 13.5680 13.6000 13.6320 13.6640 13.6960 13.7280 13.7600 13.7920 13.8240 13.8560 13.8880 13.9200 13.9520 13.9840 14.0160 14.0480 14.0800 14.1120 14.1440 14.1760 14.2080 14.2400 14.272 14.3040 14.3360 14.3680 14.4000 14.4320 14.4640 14.4960 14.5280 14.5600 14.5920 14.6240 14.6560 14.6880 14.7200 14.7520 14.7840 14.8160 14.8480 14.8800 14.9120 14.9440 14.9760 15.0080 15.0400 15.0720 15.1040 15.1360 15.1680 15.2000 15.2320 15.2640 15.2960 15.3280 15.3600 15.3920 15.4240 15.4560 15.4880 15.5200 15.5520 15.5840 15.6160 15.6480 15.6800 15.7120 15.7440 15.7760 15.8080 15.8400 15.8720 15.9040 15.9360
15.9680 16.0000 16.0320 16.0640 16.0960 16.1280 16.1600 16.1920 16.2240 16.2560 16.2880 16.3200 16.3520 16.3840 16.4160 16.4480 16.4800 16.5120 16.5440 16.5760 16.6080 16.6400 16.6720 16.7040 16.7360 16.7680 16.8000 16.8320 16.8640 16.8960 16.9280 16.9600 16.9920 17.0240 17.0560 17.0880 17.1200 17.1520 17.1840 17.2160 17.2480 17.2800 17.3120 17.3440 17.3760 17.4080 17.4400 17.4720 17.5040 17.5360 17.5680 17.6000 17.6320 17.6640 17.6960 17.7280 17.7600
Y Data:
-0.0404 -0.0405 -0.0350 -0.0406 -0.0412 -0.0407 -0.0378 -0.0405 -0.0337 -0.0417 -0.0413 -0.0387 -0.0352 -0.0373 -0.0369 -0.0388 -0.0384 -0.0351 -0.0401 -0.0314 -0.0375 -0.0390 -0.0330 -0.0343 -0.0341 -0.0369 -0.0424 -0.0369 -0.0309 -0.0387 -0.0346 -0.0433 -0.0410 -0.0355 -0.0343 -0.0396 -0.0369 -0.0400 -0.0377 -0.0330 -0.0416 -0.0348 -0.0380 -0.0338 -0.0349 -0.0359 -0.0418 -0.0336 -0.0375 -0.0309 -0.0362 -0.0422 -0.0437 -0.0352 -0.0303 -0.0335 -0.0358 -0.0467 -0.0341 -0.0306 -0.0322 -0.0338 -0.0418 -0.0417 -0.0299 -0.0264 -0.0308 -0.0352 -0.0330 -0.0261 -0.0088 -0.0071 0.0013 0.0012 0.0151 0.0352 0.0475 0.0764 0.1423 0.2617 0.4057 0.6241 0.8076 0.8872 0.9248 0.9340 0.9395 0.9514 0.9650 0.9708 0.9875 0.9852 0.9955 0.9971 0.9966 0.9981 0.9983 0.9932 1.0013 1.0011 0.9961 1.0044 0.9994 1.0028 1.0028 0.9996 1.0009 1.0024 1.0027 1.0075 1.0017 1.0001 1.0033 1.0062 1.0071 1.0032 1.0026 1.0027 1.0062 1.0063 0.9981 1.0025 0.9994 1.0075 1.0026 1.0035 1.0018 0.9999 1.0045 1.0067 0.9980 1.0044 0.9976 0.9976 1.0087 1.0026 1.0010 0.9997 1.0025 0.9943 1.0098 0.9964 0.9994 0.9973 0.9997 1.0084 1.0035 0.9974 0.9967 0.9967 1.0013 1.0060 1.0026 0.9960 0.9970 0.9987 1.0054 1.0048 0.9952 0.9937 0.9972
Attached are the images of the measured step and fitted curve.
Why not take a simple approach? Smooth your data, compute its derivative, then find the max of that derivative. You can do the first two steps by convolving with the derivative of a Gaussian, which is easy to generate.
The location of the max is the shift of the step function you’re trying to fit. The mean of the values to the left and the mean of the values to the right are the low and high values of the step function.
From first principles (toolboxes will make all of these steps simpler), the Gaussian gradient is computed like this:
x = [12.6400 12.6720 12.7040 12.7360 12.7680 12.8000 12.8320 12.8640 12.8960 12.9280 12.9600 12.9920 13.0240 13.0560 13.0880 13.1200 13.1520 13.1840 13.2160 13.2480 13.2800 13.3120 13.3440 13.3760 13.4080 13.4400 13.4720 13.5040 13.5360 13.5680 13.6000 13.6320 13.6640 13.6960 13.7280 13.7600 13.7920 13.8240 13.8560 13.8880 13.9200 13.9520 13.9840 14.0160 14.0480 14.0800 14.1120 14.1440 14.1760 14.2080 14.2400 14.272 14.3040 14.3360 14.3680 14.4000 14.4320 14.4640 14.4960 14.5280 14.5600 14.5920 14.6240 14.6560 14.6880 14.7200 14.7520 14.7840 14.8160 14.8480 14.8800 14.9120 14.9440 14.9760 15.0080 15.0400 15.0720 15.1040 15.1360 15.1680 15.2000 15.2320 15.2640 15.2960 15.3280 15.3600 15.3920 15.4240 15.4560 15.4880 15.5200 15.5520 15.5840 15.6160 15.6480 15.6800 15.7120 15.7440 15.7760 15.8080 15.8400 15.8720 15.9040 15.9360 15.9680 16.0000 16.0320 16.0640 16.0960 16.1280 16.1600 16.1920 16.2240 16.2560 16.2880 16.3200 16.3520 16.3840 16.4160 16.4480 16.4800 16.5120 16.5440 16.5760 16.6080 16.6400 16.6720 16.7040 16.7360 16.7680 16.8000 16.8320 16.8640 16.8960 16.9280 16.9600 16.9920 17.0240 17.0560 17.0880 17.1200 17.1520 17.1840 17.2160 17.2480 17.2800 17.3120 17.3440 17.3760 17.4080 17.4400 17.4720 17.5040 17.5360 17.5680 17.6000 17.6320 17.6640 17.6960 17.7280 17.7600];
y = [-0.0404 -0.0405 -0.0350 -0.0406 -0.0412 -0.0407 -0.0378 -0.0405 -0.0337 -0.0417 -0.0413 -0.0387 -0.0352 -0.0373 -0.0369 -0.0388 -0.0384 -0.0351 -0.0401 -0.0314 -0.0375 -0.0390 -0.0330 -0.0343 -0.0341 -0.0369 -0.0424 -0.0369 -0.0309 -0.0387 -0.0346 -0.0433 -0.0410 -0.0355 -0.0343 -0.0396 -0.0369 -0.0400 -0.0377 -0.0330 -0.0416 -0.0348 -0.0380 -0.0338 -0.0349 -0.0359 -0.0418 -0.0336 -0.0375 -0.0309 -0.0362 -0.0422 -0.0437 -0.0352 -0.0303 -0.0335 -0.0358 -0.0467 -0.0341 -0.0306 -0.0322 -0.0338 -0.0418 -0.0417 -0.0299 -0.0264 -0.0308 -0.0352 -0.0330 -0.0261 -0.0088 -0.0071 0.0013 0.0012 0.0151 0.0352 0.0475 0.0764 0.1423 0.2617 0.4057 0.6241 0.8076 0.8872 0.9248 0.9340 0.9395 0.9514 0.9650 0.9708 0.9875 0.9852 0.9955 0.9971 0.9966 0.9981 0.9983 0.9932 1.0013 1.0011 0.9961 1.0044 0.9994 1.0028 1.0028 0.9996 1.0009 1.0024 1.0027 1.0075 1.0017 1.0001 1.0033 1.0062 1.0071 1.0032 1.0026 1.0027 1.0062 1.0063 0.9981 1.0025 0.9994 1.0075 1.0026 1.0035 1.0018 0.9999 1.0045 1.0067 0.9980 1.0044 0.9976 0.9976 1.0087 1.0026 1.0010 0.9997 1.0025 0.9943 1.0098 0.9964 0.9994 0.9973 0.9997 1.0084 1.0035 0.9974 0.9967 0.9967 1.0013 1.0060 1.0026 0.9960 0.9970 0.9987 1.0054 1.0048 0.9952 0.9937 0.9972];
sigma = 3;               % smoothing width, in samples
cutoff = ceil(4*sigma);  % truncate the kernel at 4 sigma
kernel = -cutoff:cutoff;
kernel = -kernel .* exp(-0.5 * kernel.^2 / sigma.^2);  % derivative of a Gaussian
grad = conv(y,kernel,'same');                          % smoothed derivative of y
We can find the maximum sample with max:
[~,ii] = max(grad);
This is the sample nearest the middle of the transition point. We can refine this location by fitting a parabola to the 3 samples around the peak:
px = x(ii-1:ii+1).';
py = grad(ii-1:ii+1).';
% solve the equation: py = [px.*px, px, ones(3,1)] * params;
params = [px.*px, px, ones(3,1)] \ py;
x_max = -params(2)/(2*params(1));
Finally, we might want to include the y values before and after the transition into the fitting:
left = median(y(x<x_max));
right = median(y(x>x_max));
(Though we might want to assume that left=0 and right=1.)
Plotting:
plot(x,y)
hold on
plot([x(1),x_max,x_max,x(end)],[left,left,right,right])
To fit a full error function (which is the integral of the Gaussian) we just need one more step. Above we fitted a parabola to the three samples around the maximum; now we instead fit a parabola to the logarithm of the gradient values (see this other Q&A for an explanation), and select all values above 0.2 times the peak value for this fit, to avoid fitting to noise. That threshold sits at approximately 2 sigma, which should be enough to obtain an accurate estimate of the Gaussian peak. The parameters of the Gaussian peak are also the parameters of the smoothed error function, and we can correct the estimated sigma for this additional smoothing:
% using grad from the code above (as well as x and y)
[m,ii] = max(grad);
w = floor(sum(grad > m * 0.2) / 2); % floor keeps the index range integer
px = x(ii-w:ii+w).';
py = log(grad(ii-w:ii+w)).';
% solve the equation: py = [px.*px, px, ones(size(px))] * params;
params = [px.*px, px, ones(size(px))] \ py;
% obtain Gaussian parameters
fitted_mu = -params(2)/(2*params(1));
fitted_sigma = sqrt(-0.5/params(1));
% correct for smoothing applied
fitted_sigma = sqrt(fitted_sigma^2 - (sigma*mean(diff(x)))^2);
% evaluated fitted function
fitted_y = (erf((x-fitted_mu)/fitted_sigma) + 1) / 2 * (right-left) + left;
clf
plot(x,y)
hold on
plot(x,fitted_y)
Here is an example graphical fitter using your data and a sigmoid from my equation search. This example uses the standard scipy differential_evolution genetic algorithm module to determine initial parameter estimates for curve fitting, and that module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space requiring bounds within which to search. In this example the search bounds are derived from the data. Note that it is much easier to estimate ranges for the initial parameter estimates rather than specific values.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings
xData = numpy.array([12.6400, 12.6720, 12.7040, 12.7360, 12.7680, 12.8000, 12.8320, 12.8640, 12.8960, 12.9280, 12.9600, 12.9920, 13.0240, 13.0560, 13.0880, 13.1200, 13.1520, 13.1840, 13.2160, 13.2480, 13.2800, 13.3120, 13.3440, 13.3760, 13.4080, 13.4400, 13.4720, 13.5040, 13.5360, 13.5680, 13.6000, 13.6320, 13.6640, 13.6960, 13.7280, 13.7600, 13.7920, 13.8240, 13.8560, 13.8880, 13.9200, 13.9520, 13.9840, 14.0160, 14.0480, 14.0800, 14.1120, 14.1440, 14.1760, 14.2080, 14.2400, 14.272, 14.3040, 14.3360, 14.3680, 14.4000, 14.4320, 14.4640, 14.4960, 14.5280, 14.5600, 14.5920, 14.6240, 14.6560, 14.6880, 14.7200, 14.7520, 14.7840, 14.8160, 14.8480, 14.8800, 14.9120, 14.9440, 14.9760, 15.0080, 15.0400, 15.0720, 15.1040, 15.1360, 15.1680, 15.2000, 15.2320, 15.2640, 15.2960, 15.3280, 15.3600, 15.3920, 15.4240, 15.4560, 15.4880, 15.5200, 15.5520, 15.5840, 15.6160, 15.6480, 15.6800, 15.7120, 15.7440, 15.7760, 15.8080, 15.8400, 15.8720, 15.9040, 15.9360, 15.9680, 16.0000, 16.0320, 16.0640, 16.0960, 16.1280, 16.1600, 16.1920, 16.2240, 16.2560, 16.2880, 16.3200, 16.3520, 16.3840, 16.4160, 16.4480, 16.4800, 16.5120, 16.5440, 16.5760, 16.6080, 16.6400, 16.6720, 16.7040, 16.7360, 16.7680, 16.8000, 16.8320, 16.8640, 16.8960, 16.9280, 16.9600, 16.9920, 17.0240, 17.0560, 17.0880, 17.1200, 17.1520, 17.1840, 17.2160, 17.2480, 17.2800, 17.3120, 17.3440, 17.3760, 17.4080, 17.4400, 17.4720, 17.5040, 17.5360, 17.5680, 17.6000, 17.6320, 17.6640, 17.6960, 17.7280, 17.7600])
yData = numpy.array([-0.0404, -0.0405, -0.0350, -0.0406, -0.0412, -0.0407, -0.0378, -0.0405, -0.0337, -0.0417, -0.0413, -0.0387, -0.0352, -0.0373, -0.0369, -0.0388, -0.0384, -0.0351, -0.0401, -0.0314, -0.0375, -0.0390, -0.0330, -0.0343, -0.0341, -0.0369, -0.0424, -0.0369, -0.0309, -0.0387, -0.0346, -0.0433, -0.0410, -0.0355, -0.0343, -0.0396, -0.0369, -0.0400, -0.0377, -0.0330, -0.0416, -0.0348, -0.0380, -0.0338, -0.0349, -0.0359, -0.0418, -0.0336, -0.0375, -0.0309, -0.0362, -0.0422, -0.0437, -0.0352, -0.0303, -0.0335, -0.0358, -0.0467, -0.0341, -0.0306, -0.0322, -0.0338, -0.0418, -0.0417, -0.0299, -0.0264, -0.0308, -0.0352, -0.0330, -0.0261, -0.0088, -0.0071, 0.0013, 0.0012, 0.0151, 0.0352, 0.0475, 0.0764, 0.1423, 0.2617, 0.4057, 0.6241, 0.8076, 0.8872, 0.9248, 0.9340, 0.9395, 0.9514, 0.9650, 0.9708, 0.9875, 0.9852, 0.9955, 0.9971, 0.9966, 0.9981, 0.9983, 0.9932, 1.0013, 1.0011, 0.9961, 1.0044, 0.9994, 1.0028, 1.0028, 0.9996, 1.0009, 1.0024, 1.0027, 1.0075, 1.0017, 1.0001, 1.0033, 1.0062, 1.0071, 1.0032, 1.0026, 1.0027, 1.0062, 1.0063, 0.9981, 1.0025, 0.9994, 1.0075, 1.0026, 1.0035, 1.0018, 0.9999, 1.0045, 1.0067, 0.9980, 1.0044, 0.9976, 0.9976, 1.0087, 1.0026, 1.0010, 0.9997, 1.0025, 0.9943, 1.0098, 0.9964, 0.9994, 0.9973, 0.9997, 1.0084, 1.0035, 0.9974, 0.9967, 0.9967, 1.0013, 1.0060, 1.0026, 0.9960, 0.9970, 0.9987, 1.0054, 1.0048, 0.9952, 0.9937, 0.9972])
def func(x, a, b, Offset): # Sigmoid A with Offset from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-1.0 * a * (x - b))) + Offset
# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)
def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for a
    parameterBounds.append([minX, maxX]) # search bounds for b
    parameterBounds.append([minY, maxY]) # search bounds for Offset
    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x
# by default, differential_evolution completes by calling curve_fit() using parameter bounds
geneticParameters = generate_Initial_Parameters()
# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')
    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)
    # now the model as a line plot
    axes.plot(xModel, yModel)
    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label
    plt.show()
    plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

Euler's method in MATLAB: code doesn't work

For my computing course, I am given the following function:
y'(x) = -8y(x) + 0.5x + 1/16 and an initial value of y(0)=2.
Now, I am asked to solve this equation by using Euler's method, in MATLAB.
My code should output two arrays, xar and yar, so I can see the x-values vs. the y-values. However, if I run my code, it says "undefined variable x". Here's my code:
function [xar,yar] = Euler(a,b,ybouco,N)
% a is the lower limit
% b is the upper limit
% ybouco is the initial value
% N is the number of intervals
a=0;
b=3;
ybouco=2;
N=10;
h=(b-a)/N;
T=a:h:b;
y(1)=ybouco;
f = @(x) -8*y(x) + 0.5*x + (1/16);
y(x) = 2*exp(-8*x)+(1/16)*x;
for i = 1:N
y(i+1) = y(i)+h*f(T(i));
end
end
Can someone explain what is wrong with my code??
First of all, note that reassigning the input arguments (a, b, ybouco and N) inside the function body is wrong! They should be passed in when calling the function; there is no point in declaring input arguments and then overwriting them manually in the script.
One way is to call the function and assign the value in the command window like below:
[x,y]=Euler(0,3,2,10)
where a=0, b=3, ybouco=2 and N=10 are passed to the function as inputs, and x and y are returned by the function as outputs.
Also, when you are solving an ODE numerically, you do not know y analytically.
So you should omit that assignment and change the code slightly, like below:
function [xar,yar] = Euler(a,b,ybouco,N)
h=(b-a)/N;
T=a:h:b;
y(1)=ybouco;
for i = 1:N
f(i) = -8*y(i) + 0.5*T(i) + (1/16);
y(i+1) = y(i)+h*f(i);
end
xar=T;
yar=y;
end
Then by calling the function in the command window, you will get the following results:
x =
Columns 1 through 8
0 0.3000 0.6000 0.9000 1.2000 1.5000 1.8000 2.1000
Columns 9 through 11
2.4000 2.7000 3.0000
y =
Columns 1 through 8
2.0000 -2.7813 3.9575 -5.4317 7.7582 -10.6627 15.1716 -20.9515
Columns 9 through 11
29.6658 -41.1533 58.0384
You can also plot the result and get the following graph:
If you increase N from 10 to 100 you will have more accurate results and a smooth graph like below:
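The oscillating, growing values for N = 10 are a stability issue rather than a coding one: for the -8y term, explicit Euler multiplies the solution by (1 - 8h) each step, so it is only stable when |1 - 8h| <= 1, i.e. h <= 0.25. A quick check:
h10 = (3-0)/10    % h = 0.30, |1 - 8*h| = 1.4 > 1: unstable, oscillates and grows
h100 = (3-0)/100  % h = 0.03, |1 - 8*h| = 0.76 < 1: stable, converges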
The error message is because you have an assignment
y(x) = 2*exp(-8*x)+(1/16)*x;
where x is not defined. The x in y(x) indexes into the array y.
Maybe you intended to write
y = @(x) 2*exp(-8*x)+(1/16)*x;
to define an anonymous function. But that would clash with the array y you have already defined. Maybe just delete this line?
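For what it's worth, the expression on that line is the exact solution of this ODE (y(x) = 2*exp(-8*x) + x/16 satisfies y' = -8y + 0.5x + 1/16 with y(0) = 2), so if comparing against it was the intent, it can be computed separately on the output grid; a sketch using xar and yar from the corrected function:
y_exact = 2*exp(-8*xar) + xar/16;  % exact solution, for comparison only
plot(xar, yar, 'o-', xar, y_exact, '-');
legend('Euler', 'exact');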
Also,
h=(b-a)/N;
T=a:h:b;
can be better written as
T = linspace(a,b,N+1);

How to use MATLAB for nonlinear least squares Michaelis–Menten parameter estimation

I have a set of measurements, and I started by making a linear approximation (as in this plot). A linear least squares estimate of the parameters V_{max} and K_{m} comes from this code in MATLAB:
data=[2.0000 0.0615
2.0000 0.0527
0.6670 0.0334
0.6670 0.0334
0.4000 0.0138
0.4000 0.0258
0.2860 0.0129
0.2860 0.0183
0.2220 0.0083
0.2200 0.0169
0.2000 0.0129
0.2000 0.0087 ];
x = 1./data(:,1);
y = 1./data(:,2);
J = [x,ones(length(x),1)];
k = J\y;
vmax = 1/k(2);
km = k(1)*vmax;
lse = (vmax.*data(:,1))./(km+data(:,1));
plot(data(:,1),data(:,2),'o','color','red','linewidth',1)
line(data(:,1),lse,'linewidth',2)
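For reference, the linearization behind this code is the Lineweaver–Burk transform of the Michaelis–Menten model: inverting y = V_{max} x / (K_{m} + x) gives
1/y = (K_{m}/V_{max}) (1/x) + 1/V_{max}
which is linear in 1/x. That is why the code recovers vmax = 1/k(2) from the intercept and km = k(1)*vmax from the slope.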
This yields a fit that looks alright. Next, I wanted to do the same thing but with non-linear least squares. However, the fit always looks wrong, here is the code for that attempt:
options = optimset('MaxIter',10000,'MaxFunEvals',50000,'FunValCheck',...
'on','Algorithm',{'levenberg-marquardt',.00001});
p=lsqnonlin(@myfun,[0.1424,2.5444]);
lse = (p(1).*data(:,1))./(p(2)+data(:,1));
plot(data(:,1),data(:,2),'o','color','red','linewidth',1)
line(data(:,1),lse,'linewidth',2)
which requires this function in an M-File:
function F = myfun(x)
F = data(:,2)-(x(1).*data(:,1))./x(2)+data(:,1);
If you run the code you will see my problem. But hopefully, unlike me, you see what I'm doing wrong.
I think that you forgot some parentheses (some others are superfluous) in your nonlinear function. Using an anonymous function:
myfun = @(x)data(:,2)-x(1).*data(:,1)./(x(2)+data(:,1)); % Parentheses were missing
options = optimset('MaxIter',10000,'MaxFunEvals',50000,'FunValCheck','on',...
'Algorithm',{'levenberg-marquardt',.00001});
p = lsqnonlin(myfun,[0.1424,2.5444],[],[],options);
lse = p(1).*data(:,1)./(p(2)+data(:,1));
plot(data(:,1),data(:,2),'o','color','red','linewidth',1)
line(data(:,1),lse,'linewidth',2)
You also weren't actually applying any of your options.
You might look into using lsqcurvefit instead as it was designed for data fitting problems:
myfun = @(x,dat)x(1).*dat./(x(2)+dat);
options = optimset('MaxIter',10000,'MaxFunEvals',50000,'FunValCheck','on',...
'Algorithm',{'levenberg-marquardt',.00001});
p = lsqcurvefit(myfun,[0.1424,2.5444],data(:,1),data(:,2),[],[],options);
lse = myfun(p,data(:,1));
plot(data(:,1),data(:,2),'o','color','red','linewidth',1)
line(data(:,1),lse,'linewidth',2)

Unexpected behaviour of function findpeaks in MATLAB's Signal Processing Toolbox

Edit: Actually this is not unexpected behaviour, but I still need a solution.
findpeaks compares each element of data to its neighboring values.
I have data that contains peaks, which I detect with the function findpeaks from the Signal Processing Toolbox. Sometimes the function seems not to detect the peaks properly when the same value occurs twice in a row. This happens very rarely in my data, but here is a sample to illustrate my problem:
>> values
values =
-0.0324
-0.0371
-0.0393
-0.0387
-0.0331
-0.0280
-0.0216
-0.0134
-0.0011
0.0098
0.0217
0.0352
0.0467
0.0548
0.0639
0.0740
0.0813
0.0858 <-- here should be another peak
0.0858 <--
0.0812
0.0719
0.0600
0.0473
0.0353
0.0239
0.0151
0.0083
0.0034
-0.0001
-0.0025
-0.0043
-0.0057
-0.0048
-0.0038
-0.0026
0.0007
0.0043
0.0062
0.0083
0.0106
0.0111
0.0116
0.0102
0.0089
0.0057
0.0025
-0.0025
-0.0056
Now the findpeaks function only finds one peak:
>> [pks loc] = findpeaks(values)
pks =
0.0116
loc =
42
If I plot the data, it becomes obvious that findpeaks misses one of the peaks at the location 18/19 because they both have the value 0.08579.
What is the best way to find those missing peaks?
If you have the image processing toolbox, you can use IMREGIONALMAX to find the peaks, after which you can use regionprops to find the center of the regions (if that's what you need), i.e.
bw = imregionalmax(signal);
peakLocations = find(bw); %# returns n peaks for an n-tuple of max-values
stats = regionprops(bw,'Centroid');
peakLocations = cat(1,stats.Centroid); %# returns the center of the n-tuple of max-values
This is an old topic, but maybe some people are still looking for an easier solution to this (like I did today):
You could also just subtract some very small fixed value from every value on a plateau, except the first one. This makes the first value on each plateau always the highest on that plateau, so it gets included as a peak.
Just make something like this part of your code:
peaks = yourdata;
verysmallvalue = .001;
plateauvalue = peaks(1);
for i = 2:size(peaks,1)
if peaks(i) == plateauvalue
peaks(i) = peaks(i) - verysmallvalue;
else
plateauvalue = peaks(i);
end
end
[PKS,LOCS] = findpeaks(peaks);
plot(yourdata);
hold on;
plot(LOCS, yourdata(LOCS), 'Color', 'Red', 'LineStyle', 'none', 'Marker', 'o');
Hope this helps!
Use the second derivative test instead?
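For what it's worth, a minimal sketch in that spirit, using a sign change of the first difference (strictly a first-derivative test rather than a second, but it does handle the plateau case):
% A peak is where the slope goes from positive to non-positive;
% note that a flat shoulder on a rising slope would also be flagged
d = diff(values);
locs = find(d(1:end-1) > 0 & d(2:end) <= 0) + 1;
pks = values(locs);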
I ended up writing my own simpler version of findpeaks, which seems to work for my purpose.
function [pks,locs] = my_findpeaks(X)
M = numel(X);
pks = [];
locs = [];
if (M < 4)
error('my_findpeaks:emptyDataSet','Data set must contain at least 4 samples.');
else
for idx=1:M-3
if X(idx) < X(idx+1) && X(idx+1) >= X(idx+2) && X(idx+2) > X(idx+3)
pks = [pks X(idx)];
locs = [locs idx];
end
end
end
end
Edit: To clarify, the problem arose when a peak fell exactly between two sample points and those two sample points coincidentally had the same value. It only happened a couple of times in more than 10,000 cases.
The behavior that you describe is a known bug in versions of MATLAB prior to R2010b. The minimal example is
findpeaks([0 1 1 0])
which returns [], while
findpeaks([0 1 0])
returns the (position of the) peak.
The bug has been fixed in R2010b and later, see the official Bug Report. With that fix, findpeaks returns the rising edge of "peaks with repeated values" (which I would call plateaus).
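So on R2010b or later, the plateau case from the question resolves itself; per the bug report:
[pks, locs] = findpeaks([0 1 1 0])
% now returns pks = 1, locs = 2 (the rising edge of the plateau)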