Build a loop for a definite integral - MATLAB

I am trying to build a loop that evaluates the definite integral from 0 to y, for y = 0 to y = 20 in steps of 0.1, and obtain a value for each loop iteration (varying y) so I can plot them in a diagram.
int(2.906663106*x*(1/(1+1.38*x^4))^.4311594203 - 3.458929096*x^5/((1/(1+1.38*x^4))^.5688405797*(1+1.38*x^4)^2))

You can use MATLAB's integral function and call it inside a for loop:
val = zeros(1, 201);
y_step = 0.1;
y_max = 20;
count = 1;
% Define the integrand once, outside the loop
fun = @(x) (2.906663106 .* x .* (1 ./ (1 + 1.38 .* x.^4)).^0.4311594203 - 3.458929096 .* x.^5 ./ ((1 ./ (1 + 1.38 .* x.^4)).^0.5688405797 .* (1 + 1.38 .* x.^4).^2));
for yy = 0:y_step:y_max
    intgrl = integral(fun, 0, yy);
    val(count) = intgrl;
    count = count + 1;
end
figure
plot(val)
Each value of the integral over the range [0, yy] computed during the loop is saved in val.
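If you would rather plot against the upper limit y instead of the loop index, a small variation (reusing the variables above; y_values is just a helper vector introduced here) is:
y_values = 0:y_step:y_max;
figure
plot(y_values, val)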
Edit: updated the answer following the more detailed question in the comments.

Related

Why does the Integral() function claim there is a singularity when there obviously isn't?

I am attempting to write a function which expands another function into a Fourier series. However, for some functions the integral() function keeps emitting warnings claiming it has reached the minimum step size, which is likely due to a singularity at x = -1. My code is as follows:
H = @(t) 1 * (t >= 0) + 0; % Heaviside step function
x_a = @(t) 2*(H(mod(t+1, 4)) - H(mod(t+1, 4) - 2)) - 1;
time = linspace(-8, 8, 25);
plot(time, x_a(time))
ylim([-1.5 1.5])
xlim([-8 8])
% This is where it starts spitting out warnings if the next line is uncommented
%x_a_fourier = fourier(x_a, time, 4, 10);
function x = fourier(F, I, T, m)
    a_0 = (1/T) * integral(@(x) F(x), -T/2, T/2);
    x = a_0 * ones(1, length(I));
    w_0 = (2*pi) / T;
    a_n = @(n) (2/T) * integral(@(x) F(x) .* cos(n*w_0*x), -T/2, T/2);
    b_n = @(n) (2/T) * integral(@(x) F(x) .* sin(n*w_0*x), -T/2, T/2);
    for k = 1:length(I)
        for l = 1:m
            x(k) = x(k) + a_n(l) * cos(l*w_0*I(k)) + b_n(l) * sin(l*w_0*I(k));
        end
    end
end
From the plot() statement it should be obvious that the integral() function shouldn't run into any singularities. Any ideas as to what may be the problem?
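One possible cause worth checking (my reading, not stated in the question): the square wave x_a has jump discontinuities at t = -1 and t = 1 inside the integration interval [-T/2, T/2] = [-2, 2], and integral's adaptive quadrature can keep refining around such jumps until it reports the minimum step size. A hedged sketch of a workaround is to pass the jump locations to integral via its 'Waypoints' option, e.g. inside fourier:
% Sketch only: split the integration at the known jump points of F
a_0 = (1/T) * integral(@(x) F(x), -T/2, T/2, 'Waypoints', [-1 1]);
a_n = @(n) (2/T) * integral(@(x) F(x) .* cos(n*w_0*x), -T/2, T/2, 'Waypoints', [-1 1]);
b_n = @(n) (2/T) * integral(@(x) F(x) .* sin(n*w_0*x), -T/2, T/2, 'Waypoints', [-1 1]);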

Speeding up code which integrates a 2D gpuArray matrix using Simpson's Rule

I have a (real) 2D gpuArray which I am using as part of a larger code, and I am now trying to also integrate the array using the composite Simpson's rule inside my main loop (at least several tens of thousands of iterations). A MWE looks like the following:
%%%%%%%%%%%%%%%%%% MAIN CODE %%%%%%%%%%%%%%%%%%
Ny = 501; % Dimensions of matrix M
Nx = 503; %
dx = 0.1; % Grid spacings
dy = 0.2; %
M = rand(Ny, Nx, 'gpuArray'); % Initialise a matrix
for k = 1:10000
    % M = function1(M) % Apply some other functions to M
    % ... etc ...
    I = simpsons_integration_2D(M, dx, dy, Nx, Ny); % Now integrate M
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% Integrator %%%%%%%%%%%%%%%%%
function I = simpsons_integration_2D(F, dx, dy, Nx, Ny)
    % Integrate the 2D function F with Nx columns and Ny rows, and grid spacings
    % dx and dy using Simpson's rule.
    % Integrate along the x direction --> IX is a column vector afterwards
    sX = sum( F(:,1:2:Nx-2) + 4*F(:,2:2:(Nx-1)) + F(:,3:2:Nx) , 2);
    IX = dx/3 * sX;
    % Integrate along the y direction --> I is a scalar afterwards
    sY = sum( IX(1:2:Ny-2) + 4*IX(2:2:(Ny-1)) + IX(3:2:Ny) , 1);
    I = dy/3 * sY;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Performing the integration takes around 850 µs, which is currently a significant part of my code. This was measured using
f = #() simpsons_integration_2D(M, dx, dy, Nx, Ny);
t = gputimeit(f)
Is there a way to reduce the execution time for integrating the gpuArray matrix?
(The graphics card is the Nvidia Quadro P4000)
Many thanks
Assuming that the matrix has odd dimensions, here is a way to optimize the function:
function I = simpsons_integration_2D(F, dx, dy, Nx, Ny)
    sX = 2 * sum(F,2) + 2 * sum(F(:,2:2:(Nx-1)),2) - F(:,1) - F(:,Nx);
    sY = dx/3 * (2 * sum(sX) + 2 * sum(sX(2:2:(Ny-1))) - sX(1) - sX(Ny));
    I = dy/3 * sY;
end
EDIT
A more optimized solution using matrix multiplication:
function I = simpsons_integration_2D2(F, dx, dy, Nx, Ny)
    mx = repmat(2, Nx, 1);
    mx(2:2:(Nx-1)) = 4;
    mx(1) = 1;
    mx(Nx) = 1;
    my = repmat(2, 1, Ny);
    my(2:2:(Ny-1)) = 4;
    my(1) = 1;
    my(Ny) = 1;
    I = (dx*dy/9) * (my * (F * mx));
end
If Nx and Ny are the same, you only need to compute one of mx and my:
function I = simpsons_integration_2D2(F, dx, dy, Nx, Ny)
    mx = repmat(2, Nx, 1);
    mx(2:2:(Nx-1)) = 4;
    mx(1) = 1;
    mx(Nx) = 1;
    I = (dx*dy/9) * (mx.' * (F * mx));
end
If Nx and Ny are constant, you can precompute mx outside the function and pass it as a function argument:
function I = simpsons_integration_2D2(F, dx, dy, mx)
    I = (dx*dy/9) * (mx.' * (F * mx));
end
EDIT:
If both mx and my can be precomputed, the problem is reduced to a dot product:
m = reshape(my.' .* mx.', 1, []);
function I = simpsons_integration_2D3(F, dx, dy, m)
    I = (dx*dy/9) * (m * F(:));
end
Well, I cannot test this for you, but there are a few things that may help.
Summing along axis 1 first and then along axis 2 (or the other way around) may make some difference in terms of memory locality of the accessed terms (I don't know whether for better or worse).
function I = variation1(F, dx, dy, Nx, Ny)
    % Sum each term separately; this prevents the creation of a big intermediate matrix.
    % Multiplying by 4 outside the summation does only Ny multiplications instead of Ny*Nx/2.
    sX = sum(F(:,1:2:Nx-2), 2) + 4*sum(F(:,2:2:(Nx-1)), 2) + sum(F(:,3:2:Nx), 2);
    IX = dx/3 * sX;
    sY = sum(IX(1:2:Ny-2), 1) + 4*sum(IX(2:2:(Ny-1)), 1) + sum(IX(3:2:Ny), 1);
    I = dy/3 * sY;
end
function I = variation2(F, dx, dy, Nx, Ny)
    % a.
    % Sum each term separately; this prevents the creation of a big intermediate matrix.
    % Multiplying by 4 outside the summation does only Ny multiplications instead of Ny*Nx/2.
    % b.
    % Notice that the terms 3:2:Nx-2 appear in two summations.
    % Saves Nx*Ny/2 additions at the expense of Ny multiplications by 2.
    sX = 2*sum(F(:,3:2:Nx-2), 2) + 4*sum(F(:,2:2:(Nx-1)), 2) + F(:,1) + F(:,Nx);
    % Saves Ny multiplications by moving the constant factor after the next sum.
    sY = 2*sum(sX(3:2:Ny-2), 1) + 4*sum(sX(2:2:(Ny-1)), 1) + sX(1) + sX(Ny);
    I = (dx*dy/9) * sY;
end
function I = alternate_simpsons_integration_2D(F, dx, dy, Nx, Ny)
    % Integrate the 2D function F with Nx columns and Ny rows, and grid spacings
    % dx and dy, using Simpson's rule.
    % Notice that sum(F(:,1:2:Nx-2) + F(:,3:2:Nx), 2) has all but the end points repeated.
    IX = 4*sum(F(:,2:2:Nx-1), 2) + 2*sum(F(:,3:2:Nx-2), 2) + F(:,1) + F(:,Nx);
    % Integrate along the y direction --> I is a scalar afterwards
    sY = 4*sum(IX(2:2:Ny-1)) + 2*sum(IX(3:2:Ny-2)) + IX(1) + IX(Ny);
    I = dx*dy/9 * sY;
end
If you think it is better to use a single summation, you can do it with the formula 2*sum(2*F(2:2:end-1) + F(1:2:end-2)) + F(end) - F(1), which gives the same result but has Nx*Ny/2 fewer additions in the first integration. But these options have to be tested in your environment.
Transposed implementation:
function I = transposed_simpsons_integration_2D(F, dx, dy, Nx, Ny)
    sY = 2*sum(2*F(2:2:end-1, :) + F(1:2:end-2, :), 1) + F(end, :) - F(1, :);
    sX = 2*sum(2*sY(2:2:end-1) + sY(1:2:end-2)) + sY(end) - sY(1);
    I = dx*dy/9 * sX;
end
Using Octave (usually slower than MATLAB) I get a run time of ~400 µs per iteration. This is not the kind of workload that is interesting to run on the GPU. For comparison, randn is about 10 times slower than this function.

How to compute the derivatives automatically within for-loop in Matlab?

For the purpose of generalization, I would like MATLAB to automatically compute the 1st and 2nd derivatives of the associated function f(x) (in case I change f(x) = sin(6x) to, say, f(x) = sin(8x)).
I know there are built-in commands called diff() and syms, but I cannot figure out how to use them with the index i in the for-loop. This is the key problem I am struggling with.
How do I change the following code? I am using MATLAB R2019b.
n = 10;
h = (2.0 * pi) / (n - 1);
for i = 1 : n
    x(i) = 0.0 + (i - 1) * h;
    f(i) = sin(6 * x(i));
    dfe(i) = 6 * cos(6 * x(i)); % first derivative
    ddfe(i) = -36 * sin(6 * x(i)); % second derivative
end
You can simply use subs and double to do that. For your case:
% x is given here
n = 10;
h = (2.0 * pi) / (n - 1);
syms 'y';
g = sin(6 * y);
for i = 1 : n
    x(i) = 0.0 + (i - 1) * h;
    f(i) = double(subs(g,y,x(i)));
    dfe(i) = double(subs(diff(g),y,x(i))); % first derivative
    ddfe(i) = double(subs(diff(g,2),y,x(i))); % second derivative
end
Following @David's comment, you can vectorize the loop as well:
% x is given here
n = 10;
h = (2.0 * pi) / (n - 1);
syms 'y';
g = sin(6 * y);
x = 0.0 + ((1:n) - 1) * h;
f = double(subs(g,y,x));
dfe = double(subs(diff(g),y,x)); % first derivative
ddfe = double(subs(diff(g,2),y,x)); % second derivative
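If the symbolic subs/double calls turn out to be slow, a further option (not part of the answer above; it assumes the Symbolic Math Toolbox is available) is to convert the symbolic derivatives into plain numeric function handles once with matlabFunction and then evaluate them directly:
syms y
g = sin(6 * y);
df = matlabFunction(diff(g)); % numeric handle for the first derivative
ddf = matlabFunction(diff(g, 2)); % numeric handle for the second derivative
x = 0.0 + ((1:n) - 1) * h;
f = sin(6 * x);
dfe = df(x); % first derivative
ddfe = ddf(x); % second derivative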

Translate an equation into a code

Can you please help me translate this equation into code?
I tried this code, but it only covers the first part:
theSum = sum(M(:, y) .* S(:, y) ./ (1 + K(:, y)))
EDIT: sorry, I was having a brainfart. The answer below makes no assumptions about the nature of M, K, etc., which is why I recommended functions like that. But they're clearly matrices. I'll make another answer; I'll leave this one here for reference in case it's useful.
I would start by making the M, K, X, L, and O expressions into simple functions, so that you can easily call them as M(z,y), X(z,y) (or X(z,j), depending on the input you need), etc.
Then convert each summation into a for loop and collect the result (you can think about vectorisation later; right now focus on translating the problem). The double summation is essentially a nested for loop, where the result of the inner loop is used by the outer one at each outer iteration.
So your end result should look something like:
Summation1 = 0;
for z = 1 : Z
    tmp = M(z,y) / K(z,y) * (X(z,y) / (1 + L(z,y)));
    Summation1 = Summation1 + tmp;
end
Summation2 = 0;
for j = 1 : Y
    if j ~= y
        for z = 1 : Z
            tmp = (M(z,j) * X(z,j) * O(j)) / (K(z,j)^2 * (1 + L(z,j))) * X(z,y);
            Summation2 = Summation2 + tmp;
        end
    end
end
Result = Summation1 - Summation2;
(Btw, this assumes that all operations are on scalars. If M(z,y) outputs a vector, adjust for elementwise operations appropriately)
If M, K, etc. are all matrices, and all operations are expected to be element-wise, then this is a vectorised approach for the equation.
The left summation is:
S1 = M(1:Z,y) ./ K(1:Z,y) .* X(1:Z,y) ./ (1 + L(1:Z,y));
S1 = sum(S1);
The right summation is (assuming O is a horizontal vector):
S2 = M(1:Z, 1:Y) .* X(1:Z, 1:Y) .* repmat(O(1:Y), [Z,1]) ./ ...
     (K(1:Z, 1:Y) .^ 2 .* (1 + L(1:Z, 1:Y))) .* repmat(X(1:Z, y), [1,Y]);
S2(:,y) = []; % remove the 'y' column from the matrix
S2 = sum(S2(:)); % add all elements
End result: S1 - S2
This is a vectorised anonymous-function ("lambda") version:
equation = @(y,M,K,X,L,O) ...
    sum(M(:,y)./K(:,y).*X(:,y)./(1+L(:,y))) ...
    -sum(sum( ...
        bsxfun( ...
            @times ...
            ,M(:,[1:y-1,y+1:end]) ...
            .* X(:,[1:y-1,y+1:end]) ...
            .* O(:,[1:y-1,y+1:end]) ...
            ./ (K(:,[1:y-1,y+1:end]) .^ 2 ...
            .*(1+ L(:,[1:y-1,y+1:end]))) ...
            ,X(:,y) ...
        ) ...
    ));
%%% example:
y = 3;
Y = 5;
Z = 10;
M = rand(Y, Z);K = rand(Y, Z);X = rand(Y, Z);L = rand(Y, Z);O = rand(Y, Z);
equation(y,M,K,X,L,O)
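As a side note (not part of the answer above): on MATLAB R2016b or newer, implicit expansion makes the bsxfun call unnecessary, so the right-hand summation can be written a bit more compactly. A sketch with the same inputs:
cols = [1:y-1, y+1:size(M,2)]; % all columns except column y
S2 = sum(sum( M(:,cols) .* X(:,cols) .* O(:,cols) ...
    ./ (K(:,cols).^2 .* (1 + L(:,cols))) .* X(:,y) ));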

Finding the minimum of a function over an interval

Upon request by Martin, here is the basic problem. There is a function M(x) which is supposed to be minimized over the interval [lb, ub].
M = @(x) (a_1 * x + b_1) * (log((a_1 * x + b_1)/P_1) + X_u)...
    + (a_2 * x + b_2) * (log((a_2 * x + b_2)/P_2) + X_m)...
    + x * (log(x / P_3) + X_d);
lb = max(0, -b_1 / a_1);
ub = -b_2 / a_2;
where the inputs are:
P_1 = 0.6;
P_2 = 0.2;
P_3 = 0.2;
a_1 = 0.7071;
a_2 = -1.7071;
b_1 = 0.0245;
b_2 = 0.9755;
X_u = 44;
X_m = 2.9949;
X_d = 0;
The other option would be to solve for the root of the equation m_dash:
m_dash = @(x) log(((a_1 .* x + b_1).^a_1) .* ((a_2 .* x + b_2).^a_2) .* x)...
    - log((P_1.^a_1) .* (P_2.^a_2) .* P_3) + a_1 .* X_u + a_2 .* X_m + X_d;
Any help would be greatly appreciated.
If you want to minimize a function over a certain interval, you can use the fminbnd function. If that function is not available to you, you can either try a free alternative, or coerce the built-in function fminsearch to only return results from the interval:
rlv = 1e12; % ridiculously large value
M_hacked = @(x) rlv*((x < lb) + (x > ub)) + M(x);
x_min = fminsearch(M_hacked, (lb + ub)/2)
I introduced a new function, M_hacked, which returns ridiculously large values for x outside of the interval.
This is not the most elegant solution, but it should do for your problem.
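For reference, a minimal sketch of the two direct routes mentioned above (assuming M, m_dash, lb and ub are defined as in the question):
% Minimise M directly over the interval [lb, ub]
x_min = fminbnd(M, lb, ub)
% Or find the root of the derivative expression m_dash, starting from the middle of the interval
x_root = fzero(m_dash, (lb + ub)/2)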