Storing and accessing a piecewise interpolant using a single handle - matlab

I have a dataset containing two vectors of points, X and Y, that represent measurements of an "exponential-like" phenomenon (i.e. Y = A*exp(B*x)). When fitting it with an exponential equation I get a nice-looking fit, but when I use it to compute things it turns out that the fit is not quite as accurate as I would hope.
Currently my most promising idea is a piecewise exponential fit (taking about 6 (x,y) pairs each time), which seems to provide better results in cases I tested manually. Here's a diagram to illustrate what I mean to do:
// Assuming a window of size WS=4:
- - - - - - - - - - - - //the entire X span of the data
[- - - -]- - // the fit that should be evaluated for X(1)<= x < X(floor(WS/2)+1)
-[- - - -]- // the fit that should be evaluated for X(3)<= x < X(4)
...
- - - - - -[- - - -]- - // X(8)<= x < X(9)
... //and so on
Some Considerations:
I considered filtering the data before fitting, but this is tricky since I don't really know anything about the type of noise it contains.
I would like the piecewise fit (including all the different cases) to be accessible through a single function handle, very much like MATLAB's shape-preserving "pchip" interpolant, except using an exponential function for each piece (see the short pchip sketch after these considerations).
The creation of the fit does not need to happen during the runtime of another code. I even prefer to create it separately.
I'm not worried about the potential unsmoothness of the final function.
The only way of going about this I could think of is defining an .m file as explained in Fit a Curve Defined by a File, but that would require manually writing conditions for almost as many cases as I have points (or, obviously, writing code that generates that code for me, which is also a considerable effort).
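For clarity, this is the kind of single-handle interface I mean, demonstrated with the built-in pchip on the X and Y defined below (fPchip is just an illustrative name; it assumes X is sorted ascending, which happens later in the code anyway):
%% // The target interface, demonstrated with the built-in pchip:
pp = pchip(X, Y);              %// one object holding all the piecewise cubics
fPchip = @(xq) ppval(pp, xq);  %// a single handle that accepts arbitrarily-sized xq
%// or equivalently: fPchip = @(xq) interp1(X, Y, xq, 'pchip');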
Relevant code snippets:
clear variables; clc;
%% // Definitions
CONSTS.N_PARAMETERS_IN_MODEL = 2; %// For the model y = A*exp(B*x);
CONSTS.WINDOW_SIZE = 4;
assert(~mod(CONSTS.WINDOW_SIZE,2) && CONSTS.WINDOW_SIZE>0,...
'WINDOW_SIZE should be a natural even number.');
%% // Example Data
X = [0.002723682418630,0.002687088539567,0.002634005004610,0.002582978173834,...
0.002530684550171,0.002462144527884,0.002397219225698,0.002341097974950,...
0.002287544321171,0.002238889510803]';
Y = [0.178923076435990,0.213320004768074,0.263918364216839,0.324208349386613,...
0.394340431220509,0.511466688684182,0.671285738221314,0.843849959919278,...
1.070756756433334,1.292800046096531]';
assert(CONSTS.WINDOW_SIZE <= length(X),...
'WINDOW_SIZE cannot be larger than the amount of points.');
X = flipud(X); Y = flipud(Y); % ascending sorting is needed later for histc
%% // Initialization
nFits = length(X)-CONSTS.WINDOW_SIZE+1;
coeffMat(nFits,CONSTS.N_PARAMETERS_IN_MODEL) = 0; %// Preallocation
ft = fittype( 'exp1' );
opts = fitoptions( 'Method', 'NonlinearLeastSquares' );
%% // Populate coefficient matrix
for ind1 = 1:nFits
[xData, yData] = prepareCurveData(...
X(ind1:ind1+CONSTS.WINDOW_SIZE-1),Y(ind1:ind1+CONSTS.WINDOW_SIZE-1));
%// Fit model to data:
fitresult = fit( xData, yData, ft, opts );
%// Save coefficients:
coeffMat(ind1,:) = coeffvalues(fitresult);
end
clear ft opts ind1 xData yData fitresult ans
%% // Get the transition points
xTrans = [-inf; X(CONSTS.WINDOW_SIZE/2+1:end-CONSTS.WINDOW_SIZE/2); inf];
At this point, xTrans and coeffMat contain all the required information to evaluate the fits. To show this we can look at a vector of some test data:
testPts = [X(1), X(1)/2, mean(X(4:5)), X(CONSTS.WINDOW_SIZE)*1.01,2*X(end)];
...and finally the following should roughly happen internally within the handle:
%% // Get the correct fit# to be evaluated:
if ~isempty(xTrans)
rightFit = find(histc(testPts(3),xTrans));
else %// the case of nFits==1
rightFit = 1;
end
%% // Actually evaluate the right fit
f = @(x,A,B)A*exp(B*x);
yy = f(testPts(3),coeffMat(rightFit,1),coeffMat(rightFit,2));
And so my problem is: how do I hold that last bit (along with all the fit coefficients) inside a single handle that accepts an arbitrarily-sized input of points to interpolate on?
Related resources:
stackoverflow.com/questions/16777921/matlab-curve-fitting-exponential-vs-linear/

It's not all clear, but why not put things into a class?
classdef Piecewise < handle
    methods
        % Construction
        function [this] = Piecewise(xmeas, ymeas)
            ... here compute xTrans and coeffMat ...
        end
        % Interpolation
        function [yinterp] = Evaluate(this, xinterp)
            ... here use the previously computed xTrans and coeffMat ...
        end
    end
    properties(SetAccess=private, GetAccess=private)
        xTrans;
        coeffMat;
    end
end
In this way you can precompute the xTrans vector and the coeffMat matrix during construction, and then reuse these properties later when you need to evaluate the interpolant at xinterp values in the Evaluate method.
% The real measured data
xmeas = ...
ymeas = ...
% Build piecewise interpolation object
piecewise = Piecewise(xmeas, ymeas);
% Rebuild curve at any new positions
xinterp = ...
yinterp = piecewise.Evaluate(xinterp);
Function-like syntax
If you truly prefer a function-handle-like syntax, you can still use the above Piecewise object internally and add an extra static method that returns it wrapped in a function handle:
classdef Piecewise < handle
    ... see code above ...
    methods(Static=true)
        function [f] = MakeHandle(xmeas, ymeas)
            %[
            obj = Piecewise(xmeas, ymeas);
            f = @(x)obj.Evaluate(x);
            %]
        end
    end
end
It can then be used like this:
f = Piecewise.MakeHandle(xmeas, ymeas);
yinterp = f(xinterp);
PS1: You can later make the Evaluate method and the Piecewise constructor private if you absolutely want to force this syntax.
PS2: To fully hide the object-oriented design, you can turn MakeHandle into a plain classic routine (it will work the same as the static method, and users won't have to type Piecewise. in front of MakeHandle).
A last solution without OOP
Put everything in a single .m file:
function [f] = MakeHandle(xmeas, ymeas)
... Here compute xTrans and coeffMat ...
f = @(x)compute(x, xTrans, coeffMat); % Passing xTrans/coeffMat as hidden parameters
end
function [yinterp] = compute(xinterp, xTrans, coeffMat)
... do interpolation here...
end
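As a rough sketch of how the two placeholders above could be filled in, reusing the fitting loop and the histc lookup from the question (an exp1 model and an even window size are assumed, and windowSize is an extra argument added here for illustration):
function [f] = MakeHandle(xmeas, ymeas, windowSize)
% Fit y = A*exp(B*x) on each sliding window and return one evaluation handle.
nFits = numel(xmeas) - windowSize + 1;
coeffMat = zeros(nFits, 2);
for k = 1:nFits
    fitresult = fit(xmeas(k:k+windowSize-1), ymeas(k:k+windowSize-1), fittype('exp1'));
    coeffMat(k,:) = coeffvalues(fitresult);
end
xTrans = [-inf; xmeas(windowSize/2+1:end-windowSize/2); inf];
f = @(x)compute(x, xTrans, coeffMat);
end
function [yinterp] = compute(xinterp, xTrans, coeffMat)
% For every query point, pick the local fit via histc and evaluate it.
yinterp = zeros(size(xinterp));
for k = 1:numel(xinterp)
    rightFit = find(histc(xinterp(k), xTrans), 1);
    yinterp(k) = coeffMat(rightFit,1) * exp(coeffMat(rightFit,2) * xinterp(k));
end
end
A call like f = MakeHandle(xmeas, ymeas, 4); then gives a handle that can be evaluated at any vector of query points.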

As an extension of CitizenInsane's answer, the following is an alternative approach that allows "handle-like" access to the inner Evaluate function.
function b = subsref(this,s)
    switch s(1).type
        case '()'
            xval = s.subs{:};
            b = this.Evaluate(xval);
        otherwise %// Fall back to the default behavior for '.' and '{}':
            b = builtin('subsref',this,s);
    end
end
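With this overload added to the Piecewise class, the object itself can be called with () as if it were a handle (a small usage sketch):
pw = Piecewise(xmeas, ymeas); %// construct as before
yinterp = pw(xinterp);        %// '()' indexing now dispatches to Evaluate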
References: docs1, docs2, docs3, question1
P.S. docs2 is referenced because I initially tried to call subsref@handle (i.e. the superclass method, as one would expect in OOP when overriding methods in a subclass), but this doesn't work in MATLAB, which instead requires builtin() to achieve the same functionality.

Related

How to use the integral function with function handles in Matlab

My task is to calculate the area between two curves (named 'water retention curves'). I tried to calculate this area with the built-in integral function (see line 32), but the command window tells me the output of the function must be of the same length as its input. As you can see, I define my functions as function handles (which is required). I'm new to this method, so I expect my mistake to be in the definition of my function handles.
t_r = 0.045;
t_s = 0.43;
alfa = 0.145;
n =2.68;
z=-200:1:0;
% Calculate theta_150 and theta_200 with waterretentioncurve
% function (see below) depending on h_150 and h_200 (which both
% depend on z):
grondw_200 = -200;
h_200 = grondw_200 - z;
theta_200 = @(h_200) waterretentioncurve(t_r,t_s,alfa,n,h_200);
grondw_150 = -150;
h_150 = grondw_150 - z;
theta_150 = @(h_150) waterretentioncurve(t_r,t_s,alfa,n,h_150);
% Calculate the area between these functions. We substract the lower
% function from the upper function, which gives a function
% deltatheta_200_150 and is easy to integrate.
% Calculate deltatheta200_150 = theta_150 - theta_200 (with
% input variables depending on h_200 and h_150 (depending on z)):
deltatheta_200_150 = @(h_150,h_200) theta_150(h_150) - theta_200(h_200);
% Problem calculating the integral of function deltatheta_200_150. The
% command window tells me the function output must be the same size as
% the function input. How do I solve this issue?
deltastorage_200_150 = integral(@(deltatheta_200_150) theta_150(h_150_sand) - theta_200(h_200),-200,0)
function th = waterretentioncurve(th_r,th_s,alfa,n,h)
h(h>0)=0;
th = th_r + (th_s-th_r)./(1+(alfa.*abs(h)).^n).^(1-1/n);
end
Thanks in advance!
I tried to adjust the definition of my function handles in the integral-function, but this made the output in the command window look worse :)
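For reference, the error message means that the handle passed to integral must return a vector of the same size as the vector of integration points it receives. One possible vectorized setup (a sketch only; theta_150_of_z and theta_200_of_z are illustrative names, and the h definitions are folded into the handles) would be:
% integral() evaluates the handle on a vector of z values, so the integrand
% must be a vectorized function of the integration variable z.
theta_150_of_z = @(z) waterretentioncurve(t_r,t_s,alfa,n,-150 - z);
theta_200_of_z = @(z) waterretentioncurve(t_r,t_s,alfa,n,-200 - z);
deltatheta_200_150 = @(z) theta_150_of_z(z) - theta_200_of_z(z);
deltastorage_200_150 = integral(deltatheta_200_150, -200, 0)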

Error lsqnonlin reaching values using nested functions

I want to fit two parameters using lsqnonlin. I have set up my system of ODEs and want to solve it with the new parameters that are found after performing the lsqnonlin fit.
function [dcdt] = CSTRSinSeries(t,c,par)
for i=1:par.n
if i==1
dcdt(i) = 1/par.tau_per_tank * (par.c0-c(i))+par.kf*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
else
dcdt(i) = 1/par.tau_per_tank * (c(i-1)-c(i))+par.k*(par.c0_meth-c(i))- ...
par.kb*c(i).^2;
end
end
dcdt = dcdt';
end
% fitcrit function
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par),time_exp,init,[]);
error = (conc_exp-conc_model_fit);
end
I think the problem has to do with the fact that my parameters live in the parameter struct par and that I don't want to fit all of these parameters, but only two of them.
This is my main script for performing the curve fitting:
% initial guesses for model parameters; the number of indices is the number of fitted parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(@(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
The code now does not give an error; however, the output curve_fit_parameters is the same as my initial values (and it stays the same when I change the initial values).
The error is:
>> PackedBed
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the default value of the optimality tolerance.
<stopping criteria details>
The stopping criteria details give a relative first-order optimality of 0.00e+00, so I think the parameter changes lsqnonlin makes have no influence on my error.
I think the mistake is in the lsqnonlin call, where I refer to 'k' instead of 'par.kb' and 'par.kf'. However, I don't know how to refer to these, as they live inside nested functions. Replacing 'k' by 'par.kb, par.kf' gives me the error: Unexpected MATLAB operator.
Could anyone help me with my problem?
As suggested by Adriaan in the comments, you can easily make a copy of the par struct in your fitcrit function, and assign the parameters to optimize to this par_tmp struct. The fitcrit function then becomes:
function error = fitcrit(curve_fit_parameters,time_exp,conc_exp, par, init)
par_tmp = par;
par_tmp.kf = curve_fit_parameters(1); % change to parameters you need to fit
par_tmp.kb = curve_fit_parameters(2);
[time_model_fit,conc_model_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par_tmp),time_exp,init,[]); % pass par_tmp to the ODE function
error = (conc_exp-conc_model_fit);
end
In the script calling lsqnonlin, you will have to change the par struct to include the converged solution of the parameter subset to optimize:
% initial guesses for model parameters; the number of indices is the number of fitted parameters
k0 = [0.028 0.002];
% lower and upper bounds for model parameters, this can be altered
LB = [0.00 0.00];
UB = [Inf Inf];
% Set up fitting options
options = optimset('TolX',1.0E-6,'MaxFunEvals',1000);
% Perform nonlinear least squares fit (note that we store much more
% statistics than just the final fitted parameters)
[curve_fit_parameters,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = ...
lsqnonlin(@(k) fitcrit(k,time_exp, conc_exp, par, init),k0,LB,UB,options);
% make par_final
par_final = par;
par_final.kf = curve_fit_parameters(1);
par_final.kb = curve_fit_parameters(2);
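As a quick check (a sketch; it assumes time_exp, conc_exp and init are defined as in your script), you can simulate with par_final and compare the result against the measured concentrations:
% Simulate the tanks-in-series model with the fitted parameters and
% overlay the result on the measured data.
[t_fit, c_fit] = ode45(@(t,c) CSTRSinSeries(t,c,par_final), time_exp, init);
plot(time_exp, conc_exp, 'o', t_fit, c_fit, '-');
xlabel('time'); ylabel('concentration');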

Optimization by perturbing variable

My main script contains following code:
%# Grid and model parameters
nModel=50;
nModel_want=1;
nI_grid1=5;
Nth=1;
nRow.Scale1=5;
nCol.Scale1=5;
nRow.Scale2=5^2;
nCol.Scale2=5^2;
theta = 90; % degrees
a_minor = 2; % range along minor direction
a_major = 5; % range along major direction
sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1
%# Covariance computation
% Scale 1
for ihRow = 1:nRow.Scale1
for ihCol = 1:nCol.Scale1
[cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp');
end
end
% Scale 2
for ihRow = 1:nRow.Scale2
for ihCol = 1:nCol.Scale2
[cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp');
end
end
%# Scale-up of fine scale values by averaging
[covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1);
I am using two functions, general_CovModel() and general_AverageProperty(), in my main script, which are given as follows:
function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type)
% mod_type should be in strings
angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians
R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)];
h = [hx; hy];
lambda = a_minor/a_major;
D_lambda = [lambda 0; 0 1];
h_2prime = D_lambda*R_theta*h;
h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2));
if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1
if h_eff<=a_minor % (was 'a', which is undefined; the range a_minor is presumably meant)
cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3));
else
cov = sill;
end
elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1
cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor)));
elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1
cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2)))));
end
and
function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=...
general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,...
nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega)
% This function computes average of a property and variance of that averaged
% property using power averaging
PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT);
%# Average of property
for k=1:nUpscaledT,
for j=1:nUpscaledCol,
for i=1:nUpscaledRow,
sum=0;
for a=1:blocksize_row,
for b=1:blocksize_col,
for c=1:blocksize_t,
sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable
end
end
end
PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property
end
end
end
%# Variance of averaged property
variance_PropertyAvg=var(reshape(PropertyAvg,...
nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1);
%# Normalized variance of averaged property
NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(...
PropertyArray,numel(PropertyArray),1),1,1));
Question: Using MATLAB, I would like to optimize covAvg.Scale2 so that it matches cov.Scale1 closely, by perturbing/varying any (or all) of the following variables:
1) a_minor
2) a_major
3) theta
I am aware that I can use fminsearch; however, I am not able to perturb only the variables I want while using it.
I won't pretend to understand everything that you are doing, but it sounds like a typical minimization problem. What you want to do is come up with a single function that takes a_minor, a_major and theta as arguments, and returns a scalar measure of the mismatch between covAvg.Scale2 and cov.Scale1 (for example, the sum of squared differences). Something like this:
function d = minimize_me(a_minor, a_major, theta)
... your script goes here ...
d = sum((covAvg.Scale2(:) - cov.Scale1(:)).^2); % scalar sum of squared differences
end
Then you need MATLAB to minimize this function. There's more than one option here. Since you only have three variables to minimize over, fminsearch is a good place to start. You would call it something like this:
opts = optimset('display', 'iter');
x = fminsearch( @(x) minimize_me(x(1), x(2), x(3)), [a_minor_start a_major_start theta_start], opts)
The first argument to fminsearch is the function you want to optimize. It must take a single argument: a vector of the variables that will be perturbed in order to find the minimum value. Here I use an anonymous function to extract the values from this vector and pass them into minimize_me. The second argument to fminsearch is a vector containing the values to start searching at. The third argument is an options structure that affects the search; it's a good idea to set display to iter when you first start optimizing, so that you can get an idea of how well the optimizer is converging.
If your parameters have restricted domains (e.g. they must all be positive) take a look at fminsearchbnd on the file exchange.
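For example (a sketch; the bound values below are placeholders you would pick for your own problem), the fminsearchbnd call mirrors the fminsearch one, with lower and upper bounds appended:
% fminsearchbnd (File Exchange) has the same interface as fminsearch plus bounds.
LB = [0 0 0];         % e.g. both ranges and the angle must be non-negative
UB = [Inf Inf 360];   % e.g. cap theta at 360 degrees
x = fminsearchbnd(@(x) minimize_me(x(1), x(2), x(3)), ...
    [a_minor_start a_major_start theta_start], LB, UB, opts);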
If I have misunderstood your problem, and this doesn't help at all, try posting code that we can run to reproduce the problem ourselves.

How can I plot data to a “best fit” cos² graph in Matlab?

I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to a point where I have to plot my data (which should resemble a cos² graph - and does) to a sort of “best fit” cos² graph. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2) * [1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feedback would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1; %# A
x0(2) = 1; %# V
x0(3) = 0.5; %# P
x0(4) = 0; %# bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. To minimize a function in MATLAB, it needs to take a single vector as its parameter, so we have to pack our four parameters into a vector; the first four lines set up the initial guess x0 for that vector. I used values that are close to, but not the same as, the ones I used to generate the data.
Then I define the function f that I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset; hopefully the results are close to y. We measure how far they are from y by subtracting them from y and applying the function norm, which squares every component, adds them up and takes the square root (i.e. it computes the Euclidean norm of the residual vector).
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
%# Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
%# Find the best fitting parameters
[A V P bc] = bestfit(f,b);
%# Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
%# Make plots of the data and the function we have fitted
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
A plot of the noisy data points with the fitted curve overlaid also appears.
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised, as you mention f(a) but your function does not contain an a. In general, though, suppose you want to plot f(x) = cos(x)^2.
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.

Plotting graph error (values not showing up)

How do I plot the value of Approximation - Answer as s varies in the code below? If you look at my code below, you can see the method I used (I put it in a separate file).
However, it does not show me a graph from 1 to 1000. Instead the graph is from 999 to 1001 and does not have any points on it.
for s = 1:1000
error = LaplaceTransform(s,5) - (antiderivative(1,s)-antiderivative(0,s));
end
plot(s,error);
title('Accuracy of Approximation');
xlabel('s');
ylabel('Approximation - Exact Answer');
The functions used:
function g = LaplaceTransform(s,N);
% define function parameters
a=0;
b=1;
h=(b-a)/N;
x = 0:h:1;
% define function
g = ff(x).*exp(-s*x);
% compute the exact answer of the integral
exact_answer=antiderivative(b,s)-antiderivative(a,s)
% compute the composite trapezoid sum
If=0;
for i=1:(N-1)
If=If+g(i).*h;
end;
If=If+g(1).*h/2+g(N).*h/2;
If
with
function fx=ff(x)
fx=x;
and
function fx=antiderivative(x,s);
fx= (-exp(-s*x)*(s*x+1))/(s^2);
Any help would be appreciated. Thanks.
The following
for s = 1:1000
error = LaplaceTransform(s,5) - (antiderivative(1,s)-antiderivative(0,s));
end
plot(s,error);
already has several issues. The two main ones are that error is getting overwritten at each iteration, as @Amro has pointed out, and that s, your loop variable, is a scalar.
Thus, you need to write
difference = zeros(1000,1); %# preassignment is good for you
for s = 1:1000
difference(s) = LaplaceTransform(s,5) - (antiderivative(1,s)-antiderivative(0,s));
end
plot(1:1000,difference);
There is another error in the LaplaceTransform function
function g = LaplaceTransform(s,N);
[...]
g = ff(x).*exp(-s*x); %# g is an array
[...]
If %# If is calculated, but not returned.
I assume you want to write
function If = LaplaceTransform(s,N);
instead, because otherwise, you try to assign the array g to the scalar difference(s).
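Putting both fixes together, a corrected version of the function might look like this (a sketch; it also replaces the manual loop with a standard composite trapezoid sum over all N+1 sample points):
function If = LaplaceTransform(s,N)
% Approximate the integral of ff(x)*exp(-s*x) over [0,1] with the trapezoid rule.
a = 0;
b = 1;
h = (b-a)/N;
x = a:h:b;                                    % N+1 sample points
g = ff(x).*exp(-s*x);                         % integrand samples
If = h*(g(1)/2 + sum(g(2:end-1)) + g(end)/2); % composite trapezoid sum
end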