The problem in the link:
can be integrated analytically, and the answer is 4; however, I'm interested in integrating it numerically using MATLAB, because it's similar in form to a problem that I can't integrate analytically. The difficulty in the numerical integration is that the integrand of the two inner integrals is a function of x, y and z, and z can't be factored out.
By no means is this elegant; I hope someone can make better use of MATLAB's functions than me. I tried the brute-force way just to practice numerical integration. I avoided the pole in the inner integral at z=0 by exploiting the fact that it is also multiplied by z. I get 3.9993. Someone should be able to do better with something other than the trapezoidal rule.
function []=sofn
    clear all
    global x y z xx yy zz dx dy
    dx=0.05;
    x=0:dx:1;
    dy=0.002;
    dz=0.002;
    y=0:dy:1;
    z=0:dz:2;
    xx=length(x);
    yy=length(y);
    zz=length(z);
    s1=0;
    % trapezoidal rule over z; the z(i) factor makes the i=1 term vanish,
    % and inte1 returns 0 at z=0, so the pole never contributes
    for i=1:zz-1
        s1=s1+0.5*dz*(z(i+1)*exp(inte1(z(i+1)))+z(i)*exp(inte1(z(i))));
    end
    s1
end
function s2=inte1(localz)
    global y yy dy
    if localz==0
        s2=0;
    else
        s2=0;
        for j=1:yy-1
            s2=s2+0.5*dy*(inte2(y(j),localz)+inte2(y(j+1),localz));
        end
    end
end
function s3=inte2(localy,localz)
    % innermost trapezoidal rule over x of the integrand 2*x/(y+z);
    % its integral over x from 0 to 1 is 1/(y+z), as used in the quad-based answer below
    global x xx dx
    s3=0;
    for k=1:xx-1
        s3=s3+0.5*dx*(2*x(k)/(localy+localz)+2*x(k+1)/(localy+localz));
    end
end
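For comparison, here is a more compact sketch of the same brute-force idea written with trapz, applied to the reduced integrand 1/(y+z) (i.e. after the x integration has been done analytically, as in the quad-based answer below). The grid spacings and the small offset that keeps z away from the pole are arbitrary choices:
% Compact trapezoidal sketch of int_0^2 z*exp(int_0^1 1/(y+z) dy) dz.
% z starts one step away from 0 to stay off the pole of the inner integrand.
y = 0:0.001:1;
z = 0.001:0.001:2;
[Y, Z] = ndgrid(y, z);
innerInt = trapz(y, 1 ./ (Y + Z));   % row vector: inner integral over y for each z
val = trapz(z, z .* exp(innerInt))   % comes out close to 4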
Well, this is strange, because on the poster's similar previous question I claimed this can't be done, and now, after having looked at Guddu's answer, I realize it's not that complicated. What I wrote before, that a numerical integration results in a number but not a function, is true, but beside the point: one can simply define a function that evaluates the integral for every given parameter, and in this way one effectively does obtain a function as the result of a numerical integration.
Anyways, here it goes:
function q = outer
    f = @(z) (z .* exp(inner(z)));
    q = quad(f, eps, 2);
end

function qs = inner(zs)
    % compute \int_0^1 1 / (y + z) dy for given z
    qs = nan(size(zs));
    for i = 1 : numel(zs)
        z = zs(i);
        f = @(y) (1 ./ (y + z));
        qs(i) = quad(f, 0, 1);
    end
end
I applied the simplification suggested by myself in a comment, eliminating x. The function inner calculates the value of the inner integral over y as a function of z. Then the function outer computes the outer integral over z. I avoid the pole at z = 0 by letting the integration run from eps instead of 0. The result is
4.00000013663955
inner has to be implemented with a for loop because a function given to quad must be able to return its values for a whole vector of argument values at once, while the nested quad call inside inner only works for one scalar z at a time.
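As a side note, if your MATLAB is recent enough to have integral (an assumption about your version), its 'ArrayValued' option makes the loop unnecessary, because the outer integrand is then called with one scalar z at a time:
% Same computation with integral instead of quad (assumes R2012a or newer).
% 'ArrayValued' makes integral evaluate the integrand one scalar z at a time,
% so the inner integral does not need to be vectorised over z.
innerInt = @(z) integral(@(y) 1 ./ (y + z), 0, 1);
q = integral(@(z) z .* exp(innerInt(z)), eps, 2, 'ArrayValued', true)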
Related
I have data like this:
y = [0.001
0.0042222222
0.0074444444
0.0106666667
0.0138888889
0.0171111111
0.0203333333
0.0235555556
0.0267777778
0.03]
and
x = [3.52E-06
9.72E-05
0.0002822918
0.0004929136
0.0006759156
0.0008199029
0.0009092797
0.0009458332
0.0009749509
0.0009892005]
and I want y to be a function of x with y = a*(0.01 - b*n^(-c*x)).
What is the best and easiest computational approach to find the best combination of the coefficients a, b and c that fit to the data?
Can I use Octave?
Your function
y = a*(0.01 - b*n^(-c*x))
is in quite a specific form, with 4 unknowns (a, b, c and n). In order to estimate your parameters from your list of observations, I would recommend that you simplify it to
y = β1 + β2^(β3*x)
This becomes our model, and we can fit it by least squares, i.e. by finding the betas that minimise the sum of squared errors.
In plain MATLAB you can use fminsearch to find these β parameters (let's call them our parameter vector β), and then use simple algebra to get back to your a, b, c and n (assuming you know either b or n up front). In Octave I'm sure you can find an equivalent function; I would start by looking here: http://octave.sourceforge.net/optim/index.html.
We're going to call fminsearch, but we need to somehow pass in your observations (i.e. x and y), and we will do that using an anonymous function, much like example 2 in the fminsearch docs:
beta = fminsearch(@(beta) objfun(x, y, beta), beta0) %// beta0 holds your initial guesses for beta, e.g. [0,0,0] or [1,1,1]. Pick them somewhat close to the correct values.
And we define our objective function like this:
function sse = objfun(x, y, beta)
    f = beta(1) + beta(2).^(beta(3).*x);
    sse = sum((y-f).^2); %// sum of squared errors, often called SSE; this is what we are trying to minimise!
end
So putting it all together:
y= [0.001; 0.0042222222; 0.0074444444; 0.0106666667; 0.0138888889; 0.0171111111; 0.0203333333; 0.0235555556; 0.0267777778; 0.03];
x= [3.52E-06; 9.72E-05; 0.0002822918; 0.0004929136; 0.0006759156; 0.0008199029; 0.0009092797; 0.0009458332; 0.0009749509; 0.0009892005];
beta0 = [0,0,0];
beta = fminsearch(@(beta) objfun(x, y, beta), beta0)
Now it's your job to solve for a, b and c in terms of beta(1), beta(2) and beta(3) which you can do on paper.
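A quick way to sanity-check the fitted betas is to plot the model against the data (this plotting step is just a suggestion, assuming x, y and beta from the snippet above are in the workspace):
% Sketch: compare the fitted model with the observations
yfit = beta(1) + beta(2).^(beta(3).*x);
plot(x, y, 'o', x, yfit, '-');
legend('data', 'fitted model');
xlabel('x'); ylabel('y');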
I have an equation of the form c = integral of f(t) dt, where the limits run from a constant to a variable (I don't want to show the full equation because it is very long and complex). Is there any way to calculate in MATLAB what the value of that variable is? (There are no other variables, and the equation is too difficult to solve by hand.)
Assume your limits run from cons to t and that g(t) is your function of the variable t. Now,
syms t
f(t) = int(g(t),t);
This will give you the indefinite integral. The definite integral from cons to t is then
f(t) = f(t) - f(cons);
You know the value f(t) = c, so just solve the equation
S = solve(f(t)==c,t,'Real',true);
eval(S) (or double(S)) will give the numeric answer, I think.
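To make this concrete, here is a minimal sketch with a made-up integrand and made-up values (g(t) = exp(3*t), cons = 0 and c = 50 are purely illustrative):
% Minimal symbolic sketch: find t such that int_0^t exp(3*s) ds = 50.
syms t
g = exp(3*t);
F = int(g, t);                     % antiderivative
eqn = F - subs(F, t, 0) == 50;     % definite integral from cons = 0 up to t equals c
S = solve(eqn, t, 'Real', true);
double(S)                          % numeric value of the upper limit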
This is an extremely unclear question; if you do not want to post the full equation, post an example instead.
I am assuming this is what you intend: you have an integrand f(x), which you know, and it has been integrated over the limits x = 0 to x = y to give some constant c, which you also know; y may change, and you want to find y.
My advice would be to integrate f(x) manually, fill in the lower limit, and subtract that portion from c. Then you can employ a technique such as the Newton-Raphson method to iteratively search for the root of your equation, which is in y only.
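For what it's worth, a minimal sketch of that idea (f, c and the starting guess are placeholders, and quad stands in here for the manual integration):
% Newton-Raphson sketch for F(y) = c with F(y) = int_0^y f(x) dx.
% By the fundamental theorem of calculus F'(y) = f(y), so the update is cheap.
f = @(x) exp(3*x);                  % placeholder integrand
c = 50;                             % placeholder target value
y = 1;                              % initial guess
for k = 1:20
    residual = quad(f, 0, y) - c;   % F(y) - c
    y = y - residual / f(y);        % Newton step
end
y                                   % upper limit for which the integral equals c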
You could use a function handle for the integrand, quad for the integral, and fzero to find the unknown upper limit:
myFunc = @(t) exp(t*3); % or whatever
t0 = 0;
L = 50;                             % the value the integral should reach
f = @(b) quad(myFunc, t0, b) - L;   % zero when the integral from t0 to b equals L
bsolve = fzero(f, 2);
Hope it helps!
I need the function to output P, the nth-degree Taylor polynomial centered at a.
Here is the code I have. It works almost perfectly, but I'm not sure how to get the derivatives of the function at a without compromising my (x-a) terms...
From what I see, diff(f,k) computes the kth derivative of f, but I cannot plug in a. It looks like my code will require another MATLAB function that will do the plugging in. Any suggestions?
function [P] = mytaylor(f,a,n)
    f = sym(f);
    syms x;
    terms = 0;
    for k = 0:n
        fk = diff(f,k);
        terms = terms + fk*(x-a)^(k)/factorial(k);
    end
    P = terms;
end
You need to use the subs function to evaluate your symbolic expression fk at a.
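For example, a minimal sketch of the corrected loop (the only real change is the subs call; the k = 0 term is pulled out so diff is never asked for a zeroth derivative):
function P = mytaylor(f, a, n)
    % nth-degree Taylor polynomial of f about x = a, with each derivative
    % evaluated at a via subs before the (x - a)^k term is attached
    f = sym(f);
    syms x;
    P = subs(f, x, a);                      % k = 0 term
    for k = 1:n
        fk = subs(diff(f, x, k), x, a);     % kth derivative evaluated at x = a
        P = P + fk*(x - a)^k/factorial(k);
    end
end
For example, with syms x defined, mytaylor(exp(x), 0, 3) returns x^3/6 + x^2/2 + x + 1.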
We have an equation similar to the Fredholm integral equation of second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem is implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
    delt = @(x) delta_a(x,a,P,H,E,c0,w);
    for i = 1:500
        delt = @(x) delt(x) - 1/E.*integral(@(xi)((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)),0,a-0.001);
    end
    delta = delt;
end
delta_a is a function of x and represents the initial value of the iteration. ms is a function of x and xi.
As you can see, we want delt to depend on both x (outside the integral) and xi (inside the integral) during the iteration. Unfortunately, writing the code this way (with the function handle) does not give us a numerical value, as we wish. Nor can we write delt as two separate functions, one of x and one of xi, since xi is not defined (until integral defines it). So, how can we make sure that delt depends on xi inside the integral, and still get a numerical value out of the iteration?
Do any of you have any suggestions to how we might solve this?
Using numerical integration
Explanation of the input parameters: x is a vector of numerical values, all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can do a nesting of anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
    @(x)2*x
>> ff = @(x) f(f(x))
ff =
    @(x)f(f(x))
>> ff(2)
ans =
     8
>> f = ff;
>> f(2)
ans =
     8
Also it is possible to rebind the pointers to the functions.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i = 1:500
    delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a-0.001);  % kernel, 1/E factor etc. omitted
    delta_old = delta_new;
end
plus the inclusion of your parameters...
You may want to consider solving a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated with the interpolation nodes (x_i) = x_1, x_2, ..., x_n.
Then, solving your Picard iterations is as simple as doing
phi_{n+1} = f + K*phi_n
i.e.
for i = 1:N
    phi = f + K*phi;
end
where phi_n and f are the nodal values of phi and f on the (x_i).
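As an illustration only, here is a small sketch of that idea using trapezoidal quadrature weights instead of Lagrange interpolants; the kernel k, the right-hand side f, the interval [a, b] and the number of nodes are all made up:
% Nystrom-style discretization plus Picard iteration, with trapezoidal weights.
k = @(t, s) exp(-(t - s).^2);     % placeholder kernel
f = @(t) sin(t);                  % placeholder right-hand side
a = 0; b = 1; n = 101; N = 100;
t = linspace(a, b, n).';
w = (b - a)/(n - 1) * ones(n, 1); % trapezoidal weights
w([1 end]) = w([1 end])/2;
[T, S] = ndgrid(t, t);
K = k(T, S) * diag(w);            % K(i,j) = w_j * k(t_i, t_j)
phi = f(t);                       % initial guess: nodal values of f
for i = 1:N
    phi = f(t) + K*phi;           % Picard iteration on the nodal values
end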
I want to evaluate a double integral of the form
$$\int_{-\infty}^a \int_{-\infty}^b \sum_{i,j}^K a_ia_jx^iy^j\exp(-x^2 - y^2 + xy)dx dy $$
where the $a_i$ are constants. Since the integral is linear, I can interchange summation and integration, but then I have to evaluate $K^2$ integrals, which takes too long. Currently I do the following:
for i = 1:K
    for j = 1:K
        fun = @(x,y) x.^i.*y.^j.*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
        part(i,j) = alpha(i)*alpha(j)*integral2(fun,-inf,a,-inf,b);
    end
end
It takes too long, so I want to evaluate only one integral, but I don't know how to vectorize $\sum_{i,j}^K a_ia_jx^iy^j\exp(-x^2 - y^2 + xy)$, namely, how to supply it to integral2. I would be very grateful for any help.
It looks like you'll need to have i and j be third and fourth dimensions, in order for there to be a chance that the code will work.
I also don't have integral2 (I use Octave, and integral2 is a new MATLAB function that Octave doesn't have yet), so I can't test it, but I'd think something like this might work:
alphaset=zeros(1,1,K,K);
alphaset(1,1,1:K,1:K)=alpha(1:K)'*alpha(1:K);
i_set=zeros(1,1,K,1);
j_set=zeros(1,1,1,K);
i_set(:)=1:K;
j_set(:)=1:K;
fun=@(x,y) x.^i_set.*y.^j_set.*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
part = squeeze(alphaset.*integral2(fun,-inf,a,-inf,b));
As I said, I can't promise that it'll work, because I don't know how integral2 works. But if you replace the integral2 with simply "sum(sum(fun([1,2,4],[3,-1,2])))", then it works as intended for that operation (that is, it sums over the x and y values, and the result is a matrix over the set of indices).
If you just want to improve speed, you may try parfor.
Let $X=(x,x^2,\cdots,x^K)$, $Y=(y,y^2,\cdots,y^K)$, and let $A=(a_{ij})$ be the matrix with $a_{ij}=a_{i}a_{j}$; then
$$\sum_{i,j}^K a_{i}a_{j}x^iy^j=XAY^{T}$$
I don't have the integral2 function in my MATLAB, so I didn't test how much it improves the speed.
Also, I think you need to use syms x and y; after you compute $$XAY^{T}$$, use matlabFunction to convert the symbolic expression to a function handle. Here is my test code:
syms x y;
X = [x, x^2];
Y = [y, y^2];
Z = X*Y';
fun = matlabFunction(Z);
ff = @(x,y) x^2 + y^2;
gg = fun(x,y).*ff(x,y);
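Following the same $XAY^{T}$ observation, and since here $A=\alpha\alpha^{T}$, the double sum factors into a product of two one-dimensional polynomials, so a purely numeric function handle can be built without symbolic variables at all. In this sketch, alpha, a, b and K are assumed to be defined as in the question, and the exponential factor is taken from the question's code:
% Numeric sketch of X*A*Y' with A(i,j) = alpha(i)*alpha(j):
% sum_{i,j} alpha(i)*alpha(j)*x^i*y^j = (sum_i alpha(i)*x^i) * (sum_j alpha(j)*y^j)
p   = @(x) reshape(bsxfun(@power, x(:), 1:K) * alpha(:), size(x));  % sum_i alpha(i)*x^i
fun = @(x,y) p(x).*p(y).*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
val = integral2(fun, -inf, a, -inf, b);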