Numerical integral in MATLAB using the integral command

I am trying to compute the value of this integral using MATLAB.
Here, the other parameters have been defined or computed in the earlier part of the program as follows:
N = 2;
sigma = [0.01 0.1];
l = [15];
meu = 4*pi*10^(-7);
f = logspace ( 1, 6, 500);
w=2*pi.*f;
for j = 1 : length(f)
    q2(j) = sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(2));
    q1(j) = sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(1));
    C2(j) = 1/(q2(j));
    C1(j) = (q1(j)*C2(j) + tanh(q1(j)*l))/(q1(j)*(1+q1(j)*C2(j)*tanh(q1(j)*l)));
    Z(j) = sqrt(-1)*2*pi*f(j)*C1(j);
    Apprho(j) = meu*(1/(2*pi*f(j))*(abs(Z(j))^2));
    Phi(j) = atan(imag(Z(j))/real(Z(j)));
end
%integration part
c1=w./(2*pi);
rho0=1;
fun = @(x) log(Apprho(x)/rho0)/(x.^2-w^2);
c2= integral(fun,0,Inf);
phin=pi/4-c1.*c2;
I am getting an error like this:
Could anyone help and tell me where I am going wrong? Thanks in advance.

Define Apprho in a separate *.m function file, instead of storing it in an array:
function [ result ] = Apprho(x)
%
% Calculate f and Z based on input argument x
%
% ...
%
meu = 4*pi*10^(-7);
result = meu*(1/(2*pi*f)*(abs(Z)^2));
end
How you calculate f and Z is up to you.
MATLAB's integral works by calling the function (in this case, Apprho) repeatedly at many different x values. The x values called by integral don't necessarily correspond to the 1:length(f) values used in your original code, which is why you received errors.
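For illustration, once Apprho is a standalone function of x, the integration step from the question could look something like the sketch below. This is only a sketch of the calling pattern for a single, hypothetical angular frequency w0, with rho0 = 1 as in the question.
% Sketch: evaluate the phase integral at one example angular frequency w0.
% Assumes Apprho(x) is the separate function file described above and accepts
% vector input, because integral calls it with whole vectors of x values.
rho0 = 1;
w0   = 2*pi*100;                                    % hypothetical single frequency
fun  = @(x) log(Apprho(x)./rho0) ./ (x.^2 - w0^2);  % element-wise operations
c2   = integral(fun, 0, Inf);
phin = pi/4 - (w0/(2*pi))*c2;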

Related

SIR model using fsolve and Euler 3BDF

Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB, and the Euler 3-point backward (3BDF) scheme. I'm really confused about how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme, but I'm not sure how to proceed with fsolve and solve the system of nonlinear ODEs. The SIR model is shown as and the 3BDF scheme is formulated as
clc
clear all
gamma=1/7;
beta=1/3;
ode1 = @(R,S,I) -(beta*I*S)/(S+I+R);
ode2 = @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3 = @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f,x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs=x0:h:xmax;
y=zeros(1,length(xs));
y(1)=y0;
yb(1)=y0+f(x0,y0)*h;
for i = 1:length(xs)-1
    R = R0;
    y1(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - R, y1(i-1,:)+2*h*F(i,:))
    S = S0;
    y2(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - S, y2(i-1,:)+2*h*F(i,:))
    I = I0;
    y3(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - I, y3(i-1,:)+2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as the initial guess, with F(i,:) = f(t(i),y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index.
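A rough MATLAB sketch of that loop (not a complete solution; it assumes f(t,y) returns a row vector matching y(i,:), t is the time grid with step h, and the first two rows of y are already initialized, for example the second one by an Euler step):
% Sketch only: two-step BDF loop solving the implicit equation with fsolve.
opts = optimoptions('fsolve', 'Display', 'off');   % add tolerance options here as needed
for i = 2:length(t)-1
    G = (4*y(i,:) - y(i-1,:))/3;                   % constant part of the implicit equation
    Fi = f(t(i), y(i,:));                          % F(i,:) from the text above
    guess = y(i-1,:) + 2*h*Fi;                     % midpoint predictor, order-2 initial guess
    y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - G, guess, opts);
end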
Using all that and a reference solution from a higher-order standard solver produces the following error graphs for the components, where one can see that the first-order error of the constant first step results in a first-order global error, while a second-order-accurate first step (using the Euler method) gives a clear second-order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve

def BDF2(f, t, y0, y1):
    N, h = len(t)-1, t[1]-t[0]
    y = (N+1)*[np.asarray(y0)]
    y[1] = y1
    for i in range(1, N):
        t1, G = t[i+1], (4*y[i]-y[i-1])/3
        y[i+1] = fsolve(lambda u: u-2*h/3*f(t1,u)-G, y[i-1]+2*h*f(t[i],y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
Set up the model to be solved
gamma = 1/7
beta = 1/3
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P

def f(t, y):
    S, I, R = y
    trns = beta*S*I/(S+I+R)
    recv = gamma*I
    return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is the same as above, but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3, 2, figsize=(12,6))
for M in [16, 8, 4]:
    t = tg[::M]
    h = t[1]-t[0]
    y = BDF2(f,t,y0,y0)
    e = y - yg[::M]
    for k in range(3): ax[k,0].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f" % h)
    y = BDF2(f,t,y0,y0+h*f(0,y0))
    e = y - yg[::M]
    for k in range(3): ax[k,1].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f" % h)
for k in range(3):
    for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant")
ax[0,1].set_title("Errors: first step Euler")

Is there any way to define a variable from a formula depending on what variables are given?

Imagine you have the following formula:
a=4*b*c^2
Is there any way in MATLAB to program this in such a way that if 2 of the 3 variables are provided, MATLAB will solve for and provide the missing one?
The only alternative I am seeing is using switch-case and solving the equation myself:
if isempty(a)
    switchVar = 1;
elseif isempty(b)
    switchVar = 2;
else
    switchVar = 3;
end
switch switchVar
    case 1
        a = 4*b*c^2;
    case 2
        b = a/4/c^2;
    case 3
        c = sqrt(a/4/b);
end
Thank you very much in advance!
For a numeric (rather than symbolic) solution...
You can do this with some faffing around and anonymous functions. See the comments for details:
% For the target function: 0 = 4*b*c^2 - a
% Let x = [a,b,c]
% Define the values we know about, i.e. not "c"
% You could put any values in for the known variables, and NaN for the unknown.
x0 = [5, 10, NaN];
% Define an index for the unknown, and then clear any NaNs
idx = isnan(x0);
x0(idx) = 0;
% Make sure we have 1 unknown
assert( nnz( idx ) == 1 );
% Define a function which handles which elements of "x"
% actually influence the equation
X = @(x,ii) ( x*idx(ii) + x0(ii)*(~idx(ii)) );
% Define the function to solve, 0 = 4*b*c^2 - a = 4*x(2)*x(3)^2 - x(1) = ...
f = @(x) 4 * X(x,2) * X(x,3).^2 - X(x,1);
% Solve using fzero. Note this requires an initial guess
x0(idx) = fzero( f, 1 );
We can check these results are correct by plotting the function for a range of c values, and checking the intersection with the x-axis aligns to the output x0(3):
c = -1:0.01:1;
y = 4*x0(2)*c.^2 - x0(1);
figure
hold on
plot(c,y);
plot(c,y.*0,'r');
plot(x0(3),0,'ok','markersize',10,'linewidth',2)
Note that there were 2 valid solutions, since this is a quadratic. The initial condition provided to fzero will largely dictate which solution is found.
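For instance, starting fzero from a negative initial guess (everything else as above, with the hypothetical values a = 5, b = 10) converges to the other root:
c_neg = fzero( f, -1 )   % approximately -0.3536, i.e. -sqrt(a/(4*b))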
Edit:
You can condense this down a bit with some tweaks to my earlier syntax:
% Define all initial conditions. This includes known variable values
% i.e. a = 5, b = 10
% As well as the initial guess for unknown variable values
% i.e. c = 1 (maybe? ish?)
x0 = [5, 10, 1];
% Specify the index of the unknown variable in x
idx = 3;
% Define the helper function which handles the influence of each variable
X = @(x,ii) x*(ii==idx) + x0(ii)*(ii~=idx);
% Define the function to solve, as before
f = @(x) 4 * X(x,2) * X(x,3).^2 - X(x,1);
% Solve
x0(idx) = fzero( f, x0(idx) )
This approach has the benefit that you can just change idx (and re-run the definition steps for X and f) to switch the variable of choice!
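For instance, still with the same hypothetical equation, to solve for b instead with a = 5 and c = 2, only idx and the guess stored in that slot change:
x0  = [5, 1, 2];      % a = 5 known, b unknown (initial guess 1), c = 2 known
idx = 2;              % solve for the 2nd variable, b
X   = @(x,ii) x*(ii==idx) + x0(ii)*(ii~=idx);
f   = @(x) 4 * X(x,2) * X(x,3).^2 - X(x,1);
x0(idx) = fzero( f, x0(idx) )   % returns b = a/(4*c^2) = 5/16 = 0.3125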
First, specify the given known variables
Then rewrite the equation as 0 = 4*b*c^2 - a
Finally use solve to find the missing value
Code is as follows
syms a b c
% define known variable
a = 2; c = 5;
% equation rewritten
f = 4*b*c^2 - a == 0;
missing_value = solve(f);
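With a = 2 and c = 5 the only remaining symbolic unknown is b, so solve returns it directly; wrap it in double if you want a numeric value:
missing_value           % = 1/50  (symbolic)
double(missing_value)   % = 0.02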

Why am I getting the wrong sign on my cos(x) approximation?

I am writing a Matlab script that will approximate sin(x) and cos(x) using their Maclaurin polynomials.
When I input
arg = (5*pi)/4 I expect to get the correct approximations for
sin((5*pi)/4) = -0.7071067811865474617
cos((5*pi)/4) = -0.7071067811865476838.
Instead I get the following when running the script:
Approximation of sin(3.92699) >> -0.7071067811865474617
Actual sin(3.92699) = -0.7071067811865474617
Error approximately = 0.0000000000000000000 (0)
----------------------------------------------------------
Approximation of cos(3.92699) >> 0.7071067811865474617
Actual cos(3.92699) = -0.7071067811865476838
Error approximately = 0.0000000000000001110 (1.1102e-16)
I am getting the correct answers for sin but incorrect for cosine when the argument (angle) is in quadrant 3 or 4. The problem is that I am getting the wrong sign on the cos(arg) value. Where have I messed up?
CalculatorForSineCosine.m
% Argument for sine/cosine in radians.
arg = (5*pi)/4;
% Move the argument x so it's within [0, pi/2].
newArg = moveArgumentV2(arg);
% Calculate what degree we need for our Taylorpolynomial.
TOL = 0; % If 0, assume we want Machine Epsilon.
r = findDegreeV2(TOL);
% Plot nth degree Taylorpolynomial around x = 0 for sine.
% and calculate approximation of sin(x).
[approximatedSin, errorSin] = sin_taylorV2(r, newArg);
eS = num2str(errorSin); % errorSin in string format
% Plot nth degree Taylorpolynomial around x = 0 for cosine.
% and calculate approximation of cos(x).
[approximatedCos, errorCos] = cos_taylorV2(r, newArg);
eC = num2str(errorCos); % errorCos in string format
% Print out the result.
fprintf('\nApproximation of sin(%.5f)\t >> %.19f\n', arg, approximatedSin);
fprintf('Actual sin(%.5f)\t\t\t\t = %.19f\n', arg, sin(arg));
fprintf('Error approximately\t\t\t\t = %.19f (%s)\n', errorSin, eS);
disp("----------------------------------------------------------")
fprintf('Approximation of cos(%.5f)\t >> %.19f\n', arg, approximatedCos);
fprintf('Actual cos(%.5f)\t\t\t\t = %.19f\n', arg, cos(arg));
fprintf('Error approximately\t\t\t\t = %.19f (%s)\n\n', errorCos, eC);
sin_taylorV2.m
function [approximatedSin, errorSin] = sin_taylorV2(r, x)
%% sss
% Q_2n+1(x) where 2n+1 = degree of polynomial.
n = (r - 1)/2;
% Approximate sin(x) using its Taylorpolynomial.
approximatedSin = 0;
for k = 0:n
approximatedSin = approximatedSin + (((-1).^k) .* (x.^(2.*k+1)))./(factorial(2.*k+1));
end
% Calculate the error.
errorSin = abs(sin(x) - approximatedSin);
end
cos_taylorV2.m
function [approximatedCos, errorCos] = cos_taylorV2(r, x)
%% sss
% Q_2n+1(x) where 2n+1 = degree of polynomial and n = # terms.
n = (r - 1)/2;
% Approximate cos(x) using its Taylorpolynomial.
approximatedCos = 0;
for k = 0:n
approximatedCos = approximatedCos + (((-1).^k) .* (x.^(2.*k)))./(factorial(2.*k));
end
% Calculate the error.
errorCos = abs(cos(x) - approximatedCos);
end
moveArgumentV2.m
function newArg = moveArgumentV2(arg)
%% Moves the argument x to the interval [0, pi/2].
% Make use of sine's periodicity and choose n as ceil((x-pi)/(2*pi))
n = ceil((arg-pi)/(2*pi));
x1 = arg - 2*pi*n; % New angle will be in [-pi, pi]
x2 = abs(x1); % Angle will be in [0, pi]
if (x2 < pi/2) && (x2 > 0)
x3 = x2;
else
x3 = pi - x2;
end
newArg = x3*sign(x1); % Angle will be in [0, pi/2]
end
I would like to point out two things in your code.
First, you don't need the moveArgumentV2(arg) function, because the radius of convergence of the Maclaurin/Taylor series of sin(x) and cos(x) is the whole real line. That means the series converges for any real x, apart from the round-off errors inherent in every arithmetic operation done on a computer.
As a matter of fact, following your code, we can write a function that approximates the cos as:
function y = mycos(x,n)
y = 0;
for k = 0:n
    term = (-1)^k*x.^(2*k)/factorial(2*k);
    y = y + term;
end
end
Notice this function works for values outside the range [-pi,pi]:
x = -10*pi:0.1:10*pi;
ye = cos(x) % exact value
ya = mycos(x,100) % approximated value
plot(x,ye,x,ya,'o')
The values returned by the mycos function are close to the exact values given by the cos built-in function. This happens because I calculated the approximation with the first 100 terms. For higher values of x, however, the error is extremely large if we use just a few terms.
ya = mycos(x,10) % approximated value with 10 terms only
plot(x,ye-ya); title('error')
The problem now is that we can't just increase the number of terms without running into another problem.
If we increase the number of terms, the mycos function crumbles, because the factorial function overflows. A good idea is to change your code to avoid the factorial function. Notice the recurrence between successive terms in the Maclaurin expansion of the cos function: each term is the previous term times -x^2/((2k-1)*(2k)). Using it, you can create another function without the factorial:
function y = mycos2(x,n)
term = 1;
y = 1;
for k = 1:n
    term = -term.*x.^2/(2*k-1)/(2*k);
    y = y + term;
end
end
Here, we calculate each term in the series expansion from the previously calculated term. We avoid computing the factorial and make use of what we already have. This speeds up the code and avoids overflow. As a matter of fact, if we now calculate the cos approximation with 500 terms, we get:
x = -10*pi:0.5:10*pi;
ye = cos(x); % exact value
ya = mycos(x,500); % approximated value
ya2 = mycos2(x,500); % approximated value
plot(x,ye,x,ya,'x',x,ya2,'s')
legend('ye','ya','ya2')
Notice in this figure that the x marks are the calculations done with the mycos function, while the square marks are done without using the factorial function. The first function crumbles for values outside the range [-2,2], but the second one runs just fine. It works even when I use 1e5 terms. Increasing the number of terms reduces the error, so you can estimate how many terms to use in an approximation, given a desired tolerance. If that would require factorials of arguments greater than 170, the first function will not work properly.
factorial(170) returns 7.2574e+306, but factorial(171) returns Inf, so any term that needs the factorial of a number larger than 170 will cause problems in the first function. Avoid computing factorials at all costs.
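If you would rather pick the number of terms automatically, a small variation of mycos2 (just a sketch, for scalar x, with a hypothetical name mycos_tol) keeps adding terms until they drop below a given tolerance:
function y = mycos_tol(x, tol)
% Sketch: sum the cosine series until the next term is smaller than tol (scalar x only).
term = 1;
y = 1;
k = 0;
while abs(term) > tol
    k = k + 1;
    term = -term*x^2/((2*k-1)*(2*k));
    y = y + term;
end
end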
This is what I tried:
x = -3*pi:0.01:3*pi;
y = x;
for ii=1:numel(y)
y(ii) = moveArgumentV2(y(ii)); % not vectorized
end
plot(sin(x))
hold on
plot(sin(y))
Both sin(x) and sin(y) produce the same plot. But:
plot(cos(x))
hold on
plot(cos(y))
Now we see that cos(x) and cos(y) are not the same! This is because moveArgumentV2 changes the angle to be in the first and fourth quadrant (in the range [-pi/2, pi/2]), which is what you need for the sin function, but is not adequate for the cos function.
I would modify sin_taylorV2 and cos_taylorV2 to call moveArgumentV2, so you don't rely on the caller to know what the valid input range is. In cos_taylorV2 you would need to call it this way:
x = moveArgumentV2(x+pi/2) - pi/2;
and in sin_taylorV2 you'd call it the same way you do now.
Or, better, write cos_taylorV2 in terms of sin_taylorV2, which we know to be correct. This avoids code duplication.
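A minimal sketch of that last suggestion, using the identity cos(x) = sin(x + pi/2) and taking the raw angle x, so the caller no longer needs to call moveArgumentV2 itself (as suggested above):
function [approximatedCos, errorCos] = cos_taylorV2(r, x)
% Sketch: reuse the (correct) sine approximation, since cos(x) = sin(x + pi/2).
newArg = moveArgumentV2(x + pi/2);          % reduction that is valid for sin
[approximatedCos, ~] = sin_taylorV2(r, newArg);
errorCos = abs(cos(x) - approximatedCos);
end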

MATLAB function for Lorentzian fit with global variables

I want to fit a Lorentzian to my data, so first I want to test my fitting procedure on simulated data:
X = linspace(0,100,200);
Y = 20./((X-30).^2+20)+0.08*randn(size(X));
starting parameters
a3 = ((max(X)-min(X))/10)^2;
a2 = (max(X)+min(X))/2;
a1 = max(Y)*a3;
a0 = [a1,a2,a3];
find minimum for fit
afinal = fminsearch(@devsum,a0);
afinal is a vector with the parameters for my fit. If I test my function as follows
d = devsum(a0)
then d = 0, but if I do exactly what's in my function
a = a0;
d = sum((Y - a(1)./((X-a(2)).^2+a(3))).^2)
then d is not equal to zero. How is this possible? My function is super simple, so I don't know what's going wrong.
my function:
%devsum.m
function d = devsum(a)
global X Y
d = sum((Y - a(1)./((X-a(2)).^2+a(3))).^2);
end
Basically I'm just implementing stuff I found here
http://www.home.uni-osnabrueck.de/kbetzler/notes/fitp.pdf
page 7
It is usually better to avoid using global variables. The way I usually solve these problems is to first define a function which evaluates the curve you want to fit as a function of x and the parameters:
% lorentz.m
function y = lorentz(param, x)
y = param(1) ./ ((x-param(2)).^2 + param(3));
end
In this way, you can reuse the function later for plotting the result of the fit.
Then, you define a small anonymous function for the quantity you want to minimize, with only a single parameter as input, since that is the format fminsearch needs. Instead of using global variables, the measured X and Y are 'captured' (the technical term is a closure over these variables) in the definition of the anonymous function:
fit_error = @(param) sum((y_meas - lorentz(param, x_meas)).^2)
And finally you fit your parameters by minimizing the error with fminsearch:
fitted_param = fminsearch(fit_error, starting_param);
Quick demonstration:
% simulate some data
X = linspace(0,100,200);
Y = 20./((X-30).^2+20)+0.08*randn(size(X));
% rough guess of initial parameters
a3 = ((max(X)-min(X))/10)^2;
a2 = (max(X)+min(X))/2;
a1 = max(Y)*a3;
a0 = [a1,a2,a3];
% define lorentz inline, instead of in a separate file
lorentz = @(param, x) param(1) ./ ((x-param(2)).^2 + param(3));
% define objective function, this captures X and Y
fit_error = @(param) sum((Y - lorentz(param, X)).^2);
% do the fit
a_fit = fminsearch(fit_error, a0);
% quick plot
x_grid = linspace(min(X), max(X), 1000); % fine grid for interpolation
plot(X, Y, '.', x_grid, lorentz(a_fit, x_grid), 'r')
legend('Measurement', 'Fit')
title(sprintf('a1_fit = %g, a2_fit = %g, a3_fit = %g', ...
a_fit(1), a_fit(2), a_fit(3)), 'interpreter', 'none')
Result:

MATLAB Discretizing Sine Function with +/-

Hello, I am relatively new to MATLAB and have received an assignment in which we could use any programming language. I would like to continue with MATLAB and have decided to use it for this assignment. The question has to do with the following formula:
x(t) = A[1 + a1*E(t)] * sin{w[1 + a2*E(t)]*t + y} ± a3*E(t)
The first question we have is to develop an appropriate discretization of x(t) with a time step h. I think I understand how to do this using a step, but because there is a ± at the end I am running into errors. Here is what I have (I have simplified the equation by assigning arbitrary values to each variable):
A = 1;
E = 1;
a1 = 1;
a2 = 2;
a3 = 3;
w = 1;
y = 0;
% ts = .1;
% t = 0:ts:10;
t = 1:1:10;
x1(t) = A*(1+a1*E)*sin(w*(1+a2*E)*t+y);
x2(t) = a3*E;
y(t) = [x1(t)+x2(t), x1(t)-x2(t)]
plot(y)
The problem is I keep getting the following error because of the +/-:
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in Try1 (line 21)
y(t) = [x1(t)+x2(t), x1(t)-x2(t)]
Any help?? Thanks!
You can remove the (t) from the left-hand side of all three assignments.
y = [x1+x2, x1-x2]
MATLAB knows what to do with vectors and matrices.
Or, if you want to write it out the long way, tell MATLAB there will be two columns:
y(t, 1:2) = [x1(t)'+x2(t)', x1(t)'-x2(t)']
or two rows:
y(1:2, t) = [x1(t)+x2(t); x1(t)-x2(t)]
But this won't work when you have fractional values of t. The value in parentheses is required to be the index, not a dependent variable. If you want the whole vector, just leave it out.
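For illustration, a complete vectorized sketch with a fractional time step, keeping the arbitrary constants from the question (the phase is renamed to yph here so it doesn't clash with the output y):
% Sketch of the vectorized version suggested above.
A = 1; E = 1; a1 = 1; a2 = 2; a3 = 3; w = 1; yph = 0;
h = 0.1;                  % time step
t = 0:h:10;               % fractional t is fine, since it is no longer used as an index
x1 = A*(1 + a1*E)*sin(w*(1 + a2*E)*t + yph);
x2 = a3*E;
y  = [x1 + x2; x1 - x2];  % row 1: '+' branch, row 2: '-' branch
plot(t, y)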