I am trying to implement a routine in Mathematica/MATLAB for a stochastic process. Any code written here is Mathematica, but if someone more familiar with MATLAB can help me encode this there, that would be fine as well; Mathematica is the priority if possible.
I will state what I want to get after going over the equations first.
The following is the Ito stochastic process that I am interested in (where z(t) = [x(t), y(t)]):
with the following quantities:
We also have the following (from now on I will write x(t), y(t) as x, y):
where, is the first exit time and
GT(x) is a smooth function of x (please see the end of this post)
Goal: I want to get Q0 in terms of x and only.
===============================================================================
To find the first exit time, maybe the following can be used (courtesy of b.gatessucks). Please note that the following is Mathematica code.
CONSTRAINT FOR EXIT TIME:
const[x_, y_] := And[10^-8 <= y <= 10^-3, 0.9*(Uc) <= a1*x^2/2 - a3*x^4/4 <= 1.1*(Uc)]
x0 = 0.1; (* starting point for x[t] *)
y0 = 0.1; (* starting point for y[t] *)
proc = ItoProcess[
  {\[DifferentialD]x[t] == y[t] \[DifferentialD]t,
   \[DifferentialD]y[t] == (-G*y[t] - (a1*x[t] - a3*x[t]^3) - eps*b3b*y[t]^3) \[DifferentialD]t + Sqrt[2*eps*G] \[DifferentialD]w[t]},
  {t, x[t], y[t], Boole[const[x[t], y[t]]]},
  {{x, y}, {x0, y0}}, {t, 0}, w \[Distributed] WienerProcess[]]
Exit time is found:
SeedRandom[3]
sim = RandomFunction[proc, {0, 1, 0.001}];
First@Select[sim[[2, 1, 1]], #[[4]] == 0 &]
which outputs the first sample point of the form {t, x[t], y[t], Boole[const[x[t], y[t]]]} at which the constraint is violated.
I need to be able to use the above code to find the exit time and then use it in Q0. The above snippet produces non-zero exit times only for properly chosen initial conditions.
================================================================================
Numerical Task that needs to be set up to find Q0(x):
--> We start from the integral term in this expression. First find the exit times for many initial conditions (say 100 for now, i.e., 100 exit times) starting from inside the domain (maybe by using the snippet already mentioned in this post). Now the integral can be evaluated, for each of the 100 exit times, as a function of x and .
--> Now, the expectation of the integral in Q0 is a sample average (by the Law of Large Numbers) of the 100 integrals evaluated before. Thus Q0 can be found, and it should only be a function of x and . (See the MATLAB sketch below.)
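Since MATLAB code was also welcome, here is a minimal Euler-Maruyama sketch of this pipeline: simulate the SDE, record the first time const[x, y] is violated, and repeat over 100 paths. The values of eps, the time step, the horizon and the starting point inside the domain are assumptions chosen for illustration (the other parameters mirror the GT(x) block at the end of the post), and the integrand of Q0 is not reproduced; only the exit times are collected.

% Minimal Euler-Maruyama sketch; lines marked ASSUMED are not from the post.
b1b = 0.9; b3b = 0.8; a1b = 0.1; a3b = 0.2;
G  = b1b/0.1^2;  a1 = a1b/0.1^2;  a3 = a3b/0.1^2;   % as in the GT(x) block below
xc = sqrt(a1/a3); Uc = a1*xc^2/2 - a3*xc^4/4;
eps = 0.01;                  % ASSUMED noise strength (shadows MATLAB's built-in eps)
dt = 1e-3; Tmax = 1;         % ASSUMED time step and horizon

% Mirror of const[x, y] from the Mathematica snippet above
inDomain = @(x,y) (1e-8 <= y) && (y <= 1e-3) && ...
                  (0.9*Uc <= a1*x^2/2 - a3*x^4/4) && ...
                  (a1*x^2/2 - a3*x^4/4 <= 1.1*Uc);

nPaths = 100;                % 100 initial conditions / exit times, as described above
exitTimes = nan(nPaths, 1);
for k = 1:nPaths
    x = xc; y = 5e-4;        % ASSUMED starting point inside the domain (x near xc, small y)
    t = 0;
    while t < Tmax && inDomain(x, y)
        dW   = sqrt(dt)*randn();                                  % Wiener increment
        xNew = x + y*dt;
        yNew = y + (-G*y - (a1*x - a3*x^3) - eps*b3b*y^3)*dt + sqrt(2*eps*G)*dW;
        x = xNew; y = yNew; t = t + dt;
    end
    exitTimes(k) = t;        % first exit time for this path (Tmax if no exit occurred)
end
% Q0 would then be the sample mean of the path integrals evaluated up to each
% exit time; the integrand itself is not reproduced here.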
================================================================================
My issues:
The code above for the exit time only seems to produce non-zero exit times for appropriately chosen initial conditions. If someone could explain how to pick initial conditions such that non-trivial exit times are produced, that would be appreciated. I would really like to keep the constraints as specified above in const[x_, y_], but if there is no hope of getting tractable results with them, I wouldn't mind relaxing them.
The code below for GT(x) results in singularities: I received error messages from DSolve about indeterminate expressions. Both b0[xc] and DrhoDy[xc] become indeterminate, so DSolve has problems with the initial condition GT[xc], which becomes indeterminate as well. Any way around this would be gladly appreciated.
Finally, I really need someone's help in efficiently evaluating the 100 integrals in Mathematica (the terms are huge) for each of the exit times found previously, and in taking the expectation. I am also not sure how to find the exit times properly.
================================================================================
To find GT(x):
b1b = 0.9; b3b = 0.8; a1b = 0.1; a3b = 0.2;
G = (1/0.1^2)*b1b; a1 = (1/0.1^2)*a1b; a3 = (1/0.1^2)*a3b;
xc = Sqrt[a1/a3]; Uc = a1*xc^2/2 - a3*xc^4/4;
L = (G + Sqrt[G^2 + 4*(-(a1 - 3*a3*xc^2))])/2;
y[x_] := (x*a1 - x^3*a3)/(G + L)
b0[x_] := (y[x]*(a1*x - a3*x^3)*(1 - (a1 - 3*a3*x^2)) - G*y[x]*(a1 - 3*a3*x^2))/(y[x]^2 + (-G*y[x] + a1*x - a3*x^3)^2)
DrhoDy[x_] := Sqrt[y[x]^2/(y[x]^2 + (-G*y[x] + a1*x - a3*x^3)^2)]
linearequation = y[x]*GT'[x] + b0[x]*GT[x] == G*DrhoDy[x]^2*GT[x]^3;
GT[x] = DSolve[{linearequation, GT[xc] == Sqrt[b0[xc]/(G*DrhoDy[xc]^2)]}, GT, x]
(The ODE and the initial condition referred to here are the ones encoded above in linearequation and in the GT[xc] condition passed to DSolve.)
I noticed that, for example, the log(x) function returns slightly different values when called with vectors of different sizes in MATLAB.
Here is a minimal working example:
x1 = 0.1:0.1:1;
x2 = 0.1:0.1:1.1;
y1 = log(x1);
y2 = log(x2);
d = y1 - y2(1:length(x1));
d(7)
Executing returns:
>> ans =
-1.6653e-16
The behaviour seems to start once the vector grows beyond 10 entries.
Although the difference is very small, after being scaled by many operations on large vectors the errors become big enough to notice.
Does anyone have an idea what is happening here?
The differences already exist in x1 and x2; those errors are then propagated, and potentially accentuated, by log.
max(abs(x1 - x2(1:numel(x1))))
% 1.1102e-16
This is due to the inability of floating-point numbers to represent your data exactly. See here for more information.
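For example (using the same vectors as in the question), printing the 7th elements with full precision shows they already differ before log is applied:

x1 = 0.1:0.1:1;
x2 = 0.1:0.1:1.1;
% Print the 7th element of each vector with full precision; they differ in the
% last bit, consistent with the 1.1102e-16 maximum difference shown above.
fprintf('%.20f\n%.20f\n', x1(7), x2(7))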
Per Suever’s answer, this is because for unfathomable reasons, Matlab’s colon operator [start : step : stop] with floating-point step produces non-bit-exact results even when start and step are the same, and only stop is different.
This behaviour is arguably wrong, although it's not unknown: in a blog post from 2006 (search for "Typical MATLAB Pitfall"), Loren notes that the colon operator can suffer from floating-point accuracy issues.
Numpy/Python does it right:
import numpy as np
np.all(np.arange(0.1,1.0+1e-4, 0.1) == np.arange(0.1, 1.1+1e-4, 0.1)[:-1]) # => True
(np.arange(start, stop, step) doesn’t include stop so I use stop+1e-4 above.)
Julia does it right too:
all(collect(0.1 : 0.1 : 1) .== collect(0.1 : 0.1 : 1.1)[1:10]) # => true
Alternative. Here’s a straightforward guess as to what Numpy’s arange is doing, in Matlab:
function y = arange(start, stop, step)
%ARANGE An alternative to Matlab's colon operator
%
% The API for this function follows Numpy's arange [1].
%
% ARANGE(START, STOP, STEP) produces evenly-spaced values within the half-open
% interval [START, STOP). The resulting vector has CEIL((STOP - START) / STEP)
% elements and is roughly equivalent to (START : STEP : STOP - STEP / 2), but
% may differ from this COLON-based version due to numerical differences.
%
% ARANGE(START, STOP) assumes STEP of 1.0.
%
% [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html
  if nargin < 3 || isempty(step)
    step = 1.0;
  end
  len = ceil((stop - start) / step);
  y = start + (0 : len - 1) * step;
This function tries to keep things exact until the last possible moment, when it applies the scaling by step and shifting by start. With this, your original two vectors are bit-exact over their shared interval:
y1 = arange(0.1, 1.0 + 1e-4, 0.1);
y2 = arange(0.1, 1.1 + 1e-4, 0.1);
all(y2(1:numel(y1)) == y1) % => 1
And therefore all downstream operations like log are also bit-exact.
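The same keep-it-integer-until-the-end idea also works inline, without a helper function; a quick sketch:

% Build both grids from the same integer range and scale once at the end;
% the shared prefix is then bit-exact by construction.
x1 = (1:10) / 10;
x2 = (1:11) / 10;
isequal(x1, x2(1:10))   % => 1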
I will investigate whether this bug in Matlab is causing any problems in our internal code, and check whether we should enforce using linspace (which I believe, but have not checked, suffers less from accuracy issues) or something like arange above instead of : for floating-point steps. arange can also be tricky: as the docs note, depending on (stop-start)/step you may sometimes get a vector whose last element is greater than stop, and those same docs recommend using linspace for non-unit steps.
I want to find a root of the following function with an error of less than 0.05%:
3*x*tan(x) = 1
In MATLAB I've written this code to do so:
clc,close all
syms x;
x0 = 3.5
f= 3*x*tan(x)-1;
df = diff(f,x);
while (1)
    x1 = 1 / 3*tan(x0)
    %DIRV.. z= tan(x0)^2/3 + 1/3
    er = (abs((x1 - x0)/x1))*100
    if ( er <= 0.05)
        break;
    end
    x0 = x1;
    pause(1)
end
But it keeps running in an infinite loop with an error of 200.00 and I don't know why.
Don't use while true, as that's usually uncalled for and prone to getting stuck in infinite loops, like here. Simply set a limit on the while instead:
while er > 0.05
    %//your code
end
Additionally, to prevent getting stuck in an infinite loop you can use an iteration counter and set a maximum number of iterations:
ItCount = 0;
MaxIt = 1e5; %// maximum 100,000 iterations
while er > 0.05 & ItCount < MaxIt
    %//your code
    ItCount = ItCount + 1;
end
I see four points of discussion that I'll address separately:
Why does the error seemingly saturate at 200.0 and the loop continue infinitely?
The fixed-point iterator, as written in your code, is finding a root of f(x) = x - tan(x)/3; in other words, a value of x at which the graphs of x and tan(x)/3 cross. The only such point your iteration will home in on is 0, because it is the only attracting fixed point. And indeed, if you look at the iterates, the value of x1 is approaching 0. Good.
The bad news is that you are also dividing by that value converging toward 0. While the value of x1 remains finite, in a floating point arithmetic sense, the division works but may become inaccurate, and er actually goes NaN after enough iterations because x1 underflowed below the smallest denormalized number in the IEEE-754 standard.
Why is er 200 before then? It is approximately 200 because x1 is approximately one third of x0, since tan(x)/3 locally behaves as x/3 per its Taylor expansion about 0. So (x1 - x0)/x1 = 1 - x0/x1 ≈ 1 - 3 = -2, and abs(-2)*100 == 200.
Divisions-by-zero and relative orders-of-magnitude are why it is sometimes best to look at the absolute and relative error measures for both the values of the independent variable and function value. If need be, even putting an extremely (relatively) small finite, constant value in the denominator of the relative calculation isn't entirely a bad thing in my mind (I remember seeing it in some numerical recipe books), but that's just a band-aid for robustness's sake that typically hides a more serious error.
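For instance, a guarded version of the relative-error computation might look like this (the floor value is an arbitrary illustrative choice, not from the original code):

% Guarded relative error (in percent): the small floor keeps the denominator
% away from zero; floorVal is an arbitrary small constant chosen for illustration.
floorVal = 1e-12;
relErrPct = @(xNew, xOld) abs(xNew - xOld) / max(abs(xNew), floorVal) * 100;
relErrPct(0, 1e-8)   % stays finite even when the new iterate is exactly zero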
This convergence is far different compared to the Newton-Raphson iterations because it has absolutely no knowledge of slope and the fixed-point iteration will converge to wherever the fixed-point is (forgive the minor tautology), assuming it does converge. Unfortunately, if I remember correctly, fixed-point convergence is only guaranteed if the function is continuous in some measure, and tan(x) is not; therefore, convergence is not guaranteed since those pesky poles get in the way.
The function it appears you want to find the root of is f(x) = 3*x*tan(x)-1. A fixed-point iterator of that function would be x = 1/(3*tan(x)) or x = 1/3*cot(x), which is looking for the intersection of 3*tan(x) and 1/x. However, due to point number (2), those iterators still behave badly since they are discontinuous.
A slightly different iterator x = atan(1/(3*x)) should behave a lot better since small values of x will produce a finite value because atan(x) is continuous along the whole real line. The only drawback is that the domain of x is limited to the interval (-pi/2,pi/2), but if it converges, I think the restriction is worth it.
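As an illustration, here is a minimal sketch of that atan-based fixed-point iteration, reusing the question's starting point and tolerance (the iteration cap is an arbitrary safeguard I added):

x0 = 3.5;                 % starting point from the question
er = Inf;
iter = 0;
while er > 0.05 && iter < 1e4
    x1 = atan(1/(3*x0));              % the better-behaved fixed-point map
    er = abs((x1 - x0)/x1)*100;       % relative error in percent
    x0 = x1;
    iter = iter + 1;
end
x0    % converges to the root of 3*x*tan(x) = 1 in (0, pi/2), roughly 0.55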
Lastly, for any similar future coding endeavors, I highly recommend @Adriaan's advice. If you would like a compromise between the styles, most of my iterative functions are written with a semantic variable notDone, like this:
iter = 0;
iterMax = 1E4;
tol = 0.05;
notDone = 0.05 < er & iter < iterMax;
while notDone
    %//your code
    iter = iter + 1;
    notDone = 0.05 < er & iter < iterMax;
end
You can add flags and all that jazz, but that format is what I frequently use.
I believe that the code below achieves what you are after using Newton's method for the convergence. Please leave a comment if I have missed something.
% find x: 3*x*tan(x) = 1
f = @(x) 3*x*tan(x)-1;
dfdx = @(x) 3*tan(x)+3*x*sec(x)^2;
tolerance = 0.05; % your value?
perturbation = 1e-2;
converged = 1;
x = 3.5;
f_x = f(x);
% Use Newton's method to find the root
count = 0;
err = 10*tolerance; % something bigger than tolerance to start
while (err >= tolerance)
    count = count + 1;
    if (count > 1e3)
        converged = 0;
        disp('Did not converge.');
        break;
    end
    x0 = x;
    dfdx_x = dfdx(x);
    if (dfdx_x ~= 0)
        % Avoid division by zero
        f_x = f(x);
        x = x - f_x/dfdx_x;
    else
        % Perturb x and go back to top of while loop
        x = x + perturbation;
        continue;
    end
    err = (abs((x - x0)/x))*100;
end
if (converged)
    disp(['Converged to ' num2str(x,'%10.8e') ' in ' num2str(count) ...
        ' iterations.']);
end
I am trying to convert a MATLAB fmincon optimisation function to Julia, but without much luck. I think NLopt or Ipopt can be used. The default examples work, but when I try to change the values it doesn't seem to iterate.
using NLopt
count = 0 # keep track of # function evaluations
function myfunc(x::Vector, grad::Vector)
if length(grad) > 0
grad[1] = 0
grad[2] = 0.5/sqrt(x[2])
end
global count
count::Int += 1
println("f_$count($x)")
sqrt(x[2])
end
function myconstraint(x::Vector, grad::Vector, a, b)
if length(grad) > 0
grad[1] = 3a * (a*x[1] + b)^2
grad[2] = -1
end
(a*x[1] + b)^3 - x[2]
end
opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [-Inf, 0.])
xtol_rel!(opt,1e-4)
min_objective!(opt, myfunc)
inequality_constraint!(opt, (x,g) -> myconstraint(x,g,2,0), 1e-8)
inequality_constraint!(opt, (x,g) -> myconstraint(x,g,-1,1), 1e-8)
(minf,minx,ret) = optimize(opt, [1.234, 5.678])
println("got $minf at $minx after $count iterations (returned $ret)")
This one works fine. It gives 2 results and 11 iterations.
using NLopt
count = 0 # keep track of # function evaluations
function myfunc(x)
x[1]^2 + 5*x[2] - 3
end
function myconstraint(x)
100*x[1] + 2000*x[2] == 100
end
opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [0, 0])
upper_bounds!(opt,[10,5])
xtol_rel!(opt,1e-10)
min_objective!(opt, myfunc);
inequality_constraint!(opt, (x) -> myconstraint(x), 1e-10)
(minf,minx,ret) = optimize(opt, [0.1,0.1])
This doesn't work; it just gives a result and 0 iterations.
using NLopt
count = 0 # keep track of # function evaluations
function myfunc(x::Vector, grad::Vector)
x[1]^2 + 5*x[2] - 3
end
function myconstraint(result::Vector, x::Vector, grad::Vector)
100*x[1] + 2000*x[2] == 100
end
opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [0., 0.])
upper_bounds!(opt, [10.,5.])
xtol_rel!(opt,1e-4)
min_objective!(opt, myfunc)
inequality_constraint!(opt, (x,g) -> myconstraint(x), 1e-8)
(minf,minx,ret) = optimize(opt, [0., 0.])#even with [5.,10.]
This just gives me a result and 0 iterations.
Does anyone have any idea what I am doing wrong?
I'm having a tough time reconstructing what you've done based on the comment marathon, so instead I thought I would just provide code indicating how I would have solved this problem. Note, the following code assumes your inequality constraint is 100*x[1] + 2000*x[2] <= 100:
using NLopt
function myfunc(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad[1] = 2 * x[1]
        grad[2] = 5
    end
    xOut = x[1]^2 + 5*x[2] - 3
    println("Obj func = " * string(xOut))
    return(xOut)
end
function myconstraint(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad[1] = 100
        grad[2] = 2000
    end
    xOut = 100*x[1] + 2000*x[2] - 100
    println("Constraint val = " * string(xOut))
    return(xOut)
end
opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [0, 0])
upper_bounds!(opt,[10,5])
xtol_rel!(opt,1e-10)
min_objective!(opt, myfunc);
inequality_constraint!(opt, myconstraint, 1e-10)
(minf,minx,ret) = optimize(opt, [0.1,0.1])
There are three main differences between my code and yours:
I've explicitly included the gradient function updates. This will make the convergence routine more efficient for any gradient-based algorithm (of which MMA is one).
I'm printing out the values of my objective function and constraint function at each step. This is useful for debugging purposes, and to get some idea of what is happening under the hood.
I've made it very clear in my code that the return value from my constraint function is f(x), where the constraint is defined by the equation f(x) <= 0. This is the appropriate syntax for use with NLopt.
You stated in the comments that you are getting an error when you try to change inequality_constraint! to equality_constraint!. This is because your algorithm of choice (MMA) only supports inequality constraints. A description of the NLopt algorithms can be found here. Note:
only MMA and SLSQP support arbitrary nonlinear inequality constraints, and only SLSQP supports nonlinear equality constraints
So, switch your algorithm to :LD_SLSQP, and voila, you can now switch your inequality constraint to an equality constraint.
I understand that there is an error with dimensions in the line dr = (r - v*v/2)*dT, but I have little knowledge of MATLAB. Please help me fix it. The code is small and simple; maybe someone will find time to take a look.
function [optionPrice] = upAndOutCallOption(S,r,v,x,b,T,dT)
    t = 0;
    dr = [];
    pert = [];
    while (t < T) & (S < b)
        t = t + dT;
        dr = (r - v.*v./2).*dT;
        pert = v.*sqrt( dT ).*randn();
        S = S.*exp(dr + pert);
    end
    if S < b
        % Within barrier, so price as for a European option.
        optionPrice = exp(-r.*T).* max(0, S - x);
    else
        % Hit the barrier, so the option is withdrawn.
        optionPrice = 0;
    end
end
It is called from another function like this:
for k=1:amountOfOptions
    [optionPrices(k)] = upAndOutCallOption(stockPrice(k)*o, riskFreeRate(k)*o, ...
        volatility(k)*o, strike(k)*o, barrier(k)*o, timeToExpiry(k)*o, sampleRate(k)*o);
    result(k) = mean(optionPrices(k));
end
Hence the difficulties.
It's good that you know the problem is within dr = (r - v.*v./2).*dT;. The command itself has several possible problems, all related to dimensions (see the small illustration after this list):
Here you are doing element-wise multiplication (because of the .*) with arrays, which requires (in the case of your command) that r has the same number of rows AND columns as v, unless one of them is a scalar (since, because of the element-wise operation, v.*v/2 has the same size as v).
Moreover, it is unnecessary to do element-wise division by a scalar; there is no need for ./2 in MATLAB, /2 is enough.
And, finally, since it's element-wise multiplication again, the matrix (r - v.*v./2) must also have the same number of rows and columns as dT, again unless one of them is a scalar.
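For a concrete illustration of these size rules (the sizes below are made up for the example, not taken from the question):

% Element-wise operators need equal sizes, or a scalar operand.
r = ones(2,3); v = ones(2,3); dT = 0.01;   % compatible sizes: this works
dr = (r - v.*v/2).*dT;
vBad = ones(2,4);
% (r - vBad.*vBad/2)                       % this would throw a size-mismatch error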
Check here for more information about Matlab's matrix operations.
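For what it's worth, one way to sidestep size mismatches entirely is to keep every input to upAndOutCallOption scalar and do the averaging in the caller. A minimal sketch, reusing the question's variables; nSims and dropping the *o factors are my assumptions, not the original code:

% Sketch: call the pricer with scalar inputs and average repeated simulations
% in the caller (nSims is an assumed value).
nSims = 1000;
result = zeros(1, amountOfOptions);
for k = 1:amountOfOptions
    prices = zeros(1, nSims);
    for s = 1:nSims
        prices(s) = upAndOutCallOption(stockPrice(k), riskFreeRate(k), ...
            volatility(k), strike(k), barrier(k), timeToExpiry(k), sampleRate(k));
    end
    result(k) = mean(prices);   % Monte Carlo price estimate for option k
end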
I need help finding an integral of a function using trapezoidal sums.
The program should take successive trapezoidal sums with n = 1, 2, 3, ...
subintervals until the sums for two neighbouring values of n differ by less than a given tolerance. I want at least one FOR loop within a WHILE loop and I don't want to use the trapz function. The program takes four inputs:
f: A function handle for a function of x.
a: A real number.
b: A real number larger than a.
tolerance: A real number that is positive and very small
The problem I have is trying to implement the formula for trapezoidal sums which is
Δx/2 [y_0 + 2y_1 + 2y_2 + … + 2y_{n-1} + y_n]
Here is my code, and the area I'm stuck in is the "sum" part within the FOR loop. I'm trying to sum up 2y2 + 2y3....2yn-1 since I already accounted for 2y1. I get an answer, but it isn't as accurate as it should be. For example, I get 6.071717974723753 instead of 6.101605982576467.
Thanks for any help!
function t=trapintegral(f,a,b,tol)
    format compact; format long;
    syms x;
    oldtrap = ((b-a)/2)*(f(a)+f(b));
    n = 2;
    h = (b-a)/n;
    newtrap = (h/2)*(f(a)+(2*f(a+h))+f(b));
    while (abs(newtrap-oldtrap)>=tol)
        oldtrap = newtrap;
        for i=[3:n]
            dx = (b-a)/n;
            trapezoidsum = (dx/2)*(f(x) + (2*sum(f(a+(3:n-1))))+f(b));
            newtrap = trapezoidsum;
        end
    end
    t = newtrap;
end
The reason why this code isn't working is because there are two slight errors in your summation for the trapezoidal rule. What I am precisely referring to is this statement:
trapezoidsum = (dx/2)*(f(x) + (2*sum(f(a+(3:n-1))))+f(b));
Recall the equation for the trapezoidal integration rule (source: Wikipedia):
∫_a^b f(x) dx ≈ Δx/2 [f(x_1) + 2f(x_2) + 2f(x_3) + … + 2f(x_N) + f(x_{N+1})], where Δx = (b - a)/N and x_k = a + (k - 1)Δx.
For the first error, f(x) should be f(a) as you are including the starting point, and shouldn't be left as symbolic. In fact, you should simply get rid of the syms x statement as it is not useful in your script. a corresponds to x1 by consulting the above equation.
The next error is the second term. You actually need to multiply your index values (3:n-1) by dx. Also, this should actually go from (1:n-1) and I'll explain later. The equation above goes from 2 to N, but for our purposes, we are going to go from 1 to N-1 as you have your code set up like that.
Remember, in the trapezoidal rule, you are subdividing the finite interval into n pieces. The ith piece is defined as:
x_i = a + dx*i,
where i goes from 1 up to N-1. Note that this starts at 1 and not 3. The reason why is because the first piece is already taken into account by f(a), and we only count up to N-1 as piece N is accounted by f(b). For the equation, this goes from 2 to N and by modifying the code this way, this is precisely what we are doing in the end.
Therefore, your statement actually needs to be:
trapezoidsum = (dx/2)*(f(a) + (2*sum(f(a+dx*(1:n-1))))+f(b));
Try this and let me know if you get the right answer. FWIW, MATLAB already implements trapezoidal integration via trapz, as @ADonda already pointed out. However, you need to properly structure your x and y values before you use it. In other words, you would need to set up dx beforehand, then calculate your x points using the x_i equation that I specified above, then use these to generate your y values. You then use trapz to calculate the area. In other words:
dx = (b-a) / n;
x = a + dx*(0:n);
y = f(x);
trapezoidsum = trapz(x,y);
You can use the above code as a reference to see if you are implementing the trapezoidal rule correctly. Your implementation and using the above code should generate the same results. All you have to do is change the value of n, then run this code to generate the approximation of the area for different subdivisions underneath your curve.
Edit - August 17th, 2014
I figured out why your code isn't working. Here are the reasons why:
The for loop is unnecessary. Take a look at the for loop iteration. You have a loop going from i = [3:n] yet you don't reference the i variable at all in your loop. As such, you don't need this at all.
You are not computing successive intervals properly. What you need to do is when you compute the trapezoidal sum for the nth subinterval, you then increment this value of n, then compute the trapezoidal rule again. This value is not being incremented properly in your while loop, which is why your area is never improving.
You need to save the previous area inside the while loop, then when you compute the next area, that's when you determine whether or not the difference between the areas is less than the tolerance. We can also get rid of that code at the beginning that tries and compute the area for n = 2. That's not needed, as we can place this inside your while loop. As such, this is what your code should look like:
function t=trapintegral(f,a,b,tol)
    format long; %// Got rid of format compact. Useless
    %// n starts at 2 - Also removed syms x - Useless statement
    n = 2;
    newtrap = ((b-a)/2)*(f(a) + f(b)); %// Initialize
    oldtrap = 0; %// Initialize to 0
    while (abs(newtrap-oldtrap)>=tol)
        oldtrap = newtrap; %// Save the old area from the previous iteration
        dx = (b-a)/n; %// Compute width
        %// Determine sum
        trapezoidsum = (dx/2)*(f(a) + (2*sum(f(a+dx*(1:n-1))))+f(b));
        newtrap = trapezoidsum; %// This is the new sum
        n = n + 1; %// Go to the next value of n
    end
    t = newtrap;
end
By running your code, this is what I get:
trapezoidsum = trapintegral(@(x) (x+x.^2).^(1/3),1,4,0.00001)
trapezoidsum =
6.111776299189033
Caveat
Look at the way I defined your function. You must use element-by-element operations as the sum command inside the loop will be vectorized. Take a look at the ^ operations specifically. You need to prepend a dot to the operations. Once you do this, I get the right answer.
Edit #2 - August 18th, 2014
You said you want at least one for loop. This is highly inefficient, and whoever specified having one for loop in the code really doesn't know how MATLAB works. Nevertheless, you can use the for loop to accumulate the sum term. As such:
function t=trapintegral(f,a,b,tol)
    format long; %// Got rid of format compact. Useless
    %// n starts at 3 - Also removed syms x - Useless statement
    n = 3;
    %// Compute for n = 2 first, then proceed if we don't get a better
    %// difference tolerance
    newtrap = ((b-a)/2)*(f(a) + f(b)); %// Initialize
    oldtrap = 0; %// Initialize to 0
    while (abs(newtrap-oldtrap)>=tol)
        oldtrap = newtrap; %// Save the old area from the previous iteration
        dx = (b-a)/n; %// Compute width
        %// Determine sum
        %// Initialize
        trapezoidsum = (dx/2)*(f(a) + f(b));
        %// Accumulate sum terms
        %// Note that we multiply each term by (dx/2), but because of the
        %// factor of 2 for each of these terms, these cancel and we thus have dx
        for n2 = 1 : n-1
            trapezoidsum = trapezoidsum + dx*f(a + dx*n2);
        end
        newtrap = trapezoidsum; %// This is the new sum
        n = n + 1; %// Go to the next value of n
    end
    t = newtrap;
end
Good luck!