I want to solve the following integral in MATLAB. I wrote the following code:
function f = LO_scheme2a(r,kk)
% Run the program with: q = integral(@(r)LO_scheme2a(r,kk), 0, inf)
%% System parameters
lambda_m= 10^(-2);
lambda_f= kk.*lambda_m; % can be changed based on scenario
mu= 10^-4; % user density (choose a proper value based on the scenario)
Pdbm_m=43;
P_m=10.^(Pdbm_m./10);
Pdbm_f=20;
P_f=10.^(Pdbm_f./10);
gammadBm_m=0;
gamma_m=10.^(gammadBm_m./10);
gammadBm_f=0;
gamma_f=10.^(gammadBm_f./10);
alpha=4;
k=(P_f/P_m)^(1/alpha); % choose a proper value
Pdbm_req=0; % the optimum value should be calculated
P_req=10.^(Pdbm_req./10);
gammadBm_0=-40;
gamma_0=10.^(gammadBm_0./10);
%%
%% scheme 2
P_t=P_f;
C0=1; % by changing C_0 you can investigate the system performance
Theta_0=C0.* P_f;
%%
D1=(P_req./gamma_0).^(1./alpha);
D2=(Theta_0./gamma_0./P_t).^(1./alpha).*r;
D=D2;
lambda_fnew=lambda_f.*exp(-pi.*mu.*D.^2);
%%
PM_u=lambda_m./(lambda_m+k.^2.*lambda_fnew);
PF_u=lambda_fnew.*k.^2./(lambda_m+k.^2.*lambda_fnew);
%%
fm_M=2.*pi.*r.*(lambda_m+k.^2.*lambda_fnew).*exp(-pi.*r.^2.*(lambda_m+k.^2.*lambda_fnew));
ff_F=2.*pi.*r.*(lambda_m./k.^2+lambda_fnew).*exp(-pi.*r.^2.*(lambda_m./k.^2+lambda_fnew));
%%
%% Interference from MBSs to MU and FU
A_i= 1;
S_i=gamma_m .*r.^alpha;
LIm_m=exp(-pi.*lambda_m.*(A_i.*r).^2.*rho1(gamma_m,alpha));
%LIm_m=exp(-pi.*lambda_m.*(A_i.*r).^2.*rho0(S_i./(A_i.*r).^alpha,alpha));
A_i= 1./k;
S_i=gamma_f .*r.^alpha *P_m/P_f;
LIf_m=exp(-pi.*lambda_m.*(A_i.*r).^2.*rho1(gamma_f.*P_m.*k.^alpha./P_f,alpha));
%LIf_m=exp(-pi.*lambda_m.*(A_i.*r).^2.*rho0(S_i./(A_i.*r).^alpha,alpha));
%%
max_x= 10^5;
%% Expectation over h in Lemma 3: Interference from FAPs to MU and FU
S_i=gamma_f .*r.^alpha .*P_f./P_m;
z=(P_req/gamma_0).^(1/alpha);
%MIm_f=(exp(-h).*exp(-mu.*pi.*z^2.*h.^(2./alpha))).*((-((z.^2).*h^(2/alpha)).*(1-exp(-S_i.*z.^(-alpha))))+(S_i.*h).^(2./alpha).*real(gammainc(1-(2./alpha),S_i.*z.^(-alpha))));
for idx = 1:numel(r)
MIm_f(idx)= integral(@(h)exp(-h).*((exp(-mu.*pi.*z^2.*h.^(2./alpha))).*((-((z.^2).*h.^(2./alpha)).*(1-exp(-S_i.*z.^(-alpha))))+(S_i.*h).^(2./alpha).*real(gammainc(1-(2./alpha),S_i.*z.^(-alpha))))),0,max_x);
end
% %
S_i=gamma_f .*r.^alpha ;
z=(P_req./gamma_0).^(1/alpha);
for idx = 1:numel(r)
MIf_f(idx)= integral(@(h)(exp(-h).*exp(-mu.*pi.*z^2.*h.^(2./alpha))).*((-((z.^2).*h.^(2./alpha)).*(1-exp(-S_i.*z.^(-alpha))))+(S_i.*h).^(2./alpha).*real(gammainc(1-(2./alpha),S_i.*z.^(-alpha)))),0,max_x);
end
%%
f=PM_u.*LIm_m.*exp(-2.*pi.*lambda_f.*MIm_f).*fm_M+PF_u.*LIf_m.*exp(-2.*pi.*lambda_f.*MIf_f).*ff_F;
where rho1 is
function [ f_rho ] = rho1( x,alpha)
f_rho=x.^(2./alpha).*(pi./2-atan(x.^(-2./alpha)));
end
I can run it for different kk, but for some values of kk it gives me an error. For example, when kk=10 I can run the program with
q = integral(@(r)LO_scheme2a(r,kk), 0, inf)
but for kk=100 it gives me an error:
Error using .*
Matrix dimensions must agree.
Would you please help me solve this error? Thanks
The problem resides in the lines
for idx = 1:numel(r)
MIm_f(idx)= integral(@(h)exp(-h).*((exp(-mu.*pi.*z^2.*h.^(2./alpha))).*((-((z.^2).*h.^(2./alpha)).*(1-exp(-S_i.*z.^(-alpha))))+(S_i.*h).^(2./alpha).*real(gammainc(1-(2./alpha),S_i.*z.^(-alpha))))),0,max_x);
end
The outer integral expands S_i into a horizontal (row) vector, and the inner integral does the same with h, so the element-wise multiplications only work when both sizes happen to match, and even then they give wrong results. I would transpose h so that it expands along the vertical dimension, and match the multiplications with bsxfun instead of .*, as follows:
for idx = 1:numel(r)
MIm_f(idx)= integral(@(h) ...
bsxfun(@times, exp(-h.'), ...
bsxfun(@times,(exp(-mu.*pi.*z^2.*(h.').^(2./alpha))), ...
bsxfun(@times, -((z.^2).*(h.').^(2./alpha)), (1-exp(-S_i.*z.^(-alpha)))) + ...
bsxfun(@times, bsxfun(@times, S_i, h.').^(2./alpha), real(gammainc(1-(2./alpha),S_i.*z.^(-alpha))))) ...
) ...
,0,max_x);
end
Do the same for the two inner integrals.
NOTE: I don't have the integral function, but this approach works for me with quadv (you need an integration command that can handle vector-valued functions).
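For reference, if your MATLAB does have integral (R2012a or newer), its 'ArrayValued' option plays the role of quadv and removes the need for the loop and the bsxfun calls entirely. A minimal sketch, assuming the S_i, z, mu, alpha and max_x defined in the snippets above:
% With 'ArrayValued', the integrand is called with a scalar h and may return a
% vector (here, one value per element of S_i); integral integrates element-wise.
g = @(h) exp(-h).*exp(-mu.*pi.*z^2.*h.^(2./alpha)) .* ...
    ( -(z.^2).*h.^(2./alpha).*(1-exp(-S_i.*z.^(-alpha))) + ...
      (S_i.*h).^(2./alpha).*real(gammainc(1-(2./alpha),S_i.*z.^(-alpha))) );
MIm_f = integral(g, 0, max_x, 'ArrayValued', true);  % row vector, same size as S_i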
I'm trying to run a program in MATLAB to obtain the direct and inverse DFT of a grey scale image, but I'm not able to recover the original image after applying the inverse. I'm getting complex numbers as my inverse output; it's like I'm losing information. Any ideas on this? Here is my code:
%2D discrete Fourier transform
%Image Dimension
M=3;
N=3;
f=zeros(M,N);
f(2,1:3)=1;
f(3,1:3)=0.5;
f(1,2)=0.5;
f(3,2)=1;
f(2,2)=0;
figure;imshow(f,[0 1],'InitialMagnification','fit')
%Direct transform
for u=0:1:M-1
for v=0:1:N-1
for x=1:1:M
for y=1:1:N
F(u+1,v+1)=f(x,y)*exp(-2*pi*(1i)*((u*(x-1)/M)+(v*(y-1)/N)));
end
end
end
end
Fab=abs(F);
figure;imshow(Fab,[0 1],'InitialMagnification','fit')
%Inverse Transform
for x=0:1:M-1
for y=0:1:N-1
for u=1:1:M
for v=1:1:N
z(x+1,y+1)=(1/M*N)*F(u,v)*exp(2*pi*(1i)*(((u-1)*x/M)+((v-1)*y/N)));
end
end
end
end
figure;imshow(real(z),[0 1],'InitialMagnification','fit')
There are a couple of issues with your code:
You are not applying the definition of the DFT (or IDFT) correctly: you need to sum over the original variable(s) to obtain the transform. See the formula here (also written out below); notice the sum.
In the IDFT the normalization constant should be 1/(M*N) (not 1/M*N).
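For reference, the forward transform being implemented is (with zero-based frequencies u = 0..M-1 and v = 0..N-1; the x-1 and y-1 in the code account for MATLAB's one-based indexing):
F(u,v) = sum over x = 0..M-1 and y = 0..N-1 of f(x,y) * exp(-2*pi*1i*(u*x/M + v*y/N))
The inverse transform flips the sign of the exponent and multiplies the double sum by 1/(M*N).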
Note also that the code could be made much more compact by vectorization, avoiding the loops; or just by using the fft2 and ifft2 functions. I assume you want to compute it manually and "low-level" to verify the results.
The code, with the two corrections, is as follows. The modifications are marked with comments.
M=3;
N=3;
f=zeros(M,N);
f(2,1:3)=1;
f(3,1:3)=0.5;
f(1,2)=0.5;
f(3,2)=1;
f(2,2)=0;
figure;imshow(f,[0 1],'InitialMagnification','fit')
%Direct transform
F = zeros(M,N); % initiallize to 0
for u=0:1:M-1
for v=0:1:N-1
for x=1:1:M
for y=1:1:N
F(u+1,v+1) = F(u+1,v+1) + ...
f(x,y)*exp(-2*pi*(1i)*((u*(x-1)/M)+(v*(y-1)/N))); % add term
end
end
end
end
Fab=abs(F);
figure;imshow(Fab,[0 1],'InitialMagnification','fit')
%Inverse Transform
z = zeros(M,N);
for x=0:1:M-1
for y=0:1:N-1
for u=1:1:M
for v=1:1:N
z(x+1,y+1) = z(x+1,y+1) + (1/(M*N)) * ... % corrected scale factor
F(u,v)*exp(2*pi*(1i)*(((u-1)*x/M)+((v-1)*y/N))); % add term
end
end
end
end
figure;imshow(real(z),[0 1],'InitialMagnification','fit')
Now the original and recovered image differ only by very small values, of the order of eps, due to the usual floating-point inaccuracies:
>> f-z
ans =
1.0e-15 *
Columns 1 through 2
0.180411241501588 + 0.666133814775094i -0.111022302462516 - 0.027755575615629i
0.000000000000000 + 0.027755575615629i 0.277555756156289 + 0.212603775716506i
0.000000000000000 - 0.194289029309402i 0.000000000000000 + 0.027755575615629i
Column 3
-0.194289029309402 - 0.027755575615629i
-0.222044604925031 - 0.055511151231258i
0.111022302462516 - 0.111022302462516i
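As an extra sanity check (a sketch, assuming the corrected F and z above are in the workspace), the manual transforms should agree with the built-in fft2/ifft2 up to round-off:
max(max(abs(F - fft2(f))))  % manual DFT vs fft2, of the order of eps
max(max(abs(z - f)))        % round trip recovers the image up to eps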
Firstly, the biggest error is that you are computing the Fourier transform incorrectly. When computing F, you need to be summing over x and y, which you are not doing. Here's how to rectify that:
F = zeros(M, N);
for u=0:1:M-1
for v=0:1:N-1
for x=1:1:M
for y=1:1:N
F(u+1,v+1)=F(u+1,v+1) + f(x,y)*exp(-2*pi*(1i)*((u*(x-1)/M)+(v*(y-1)/N)));
end
end
end
end
Secondly, in the inverse transform, your bracketing is incorrect. It should be 1/(M*N) not (1/M*N).
As an aside, at the cost of a bit more memory, you can speed up the computation by not nesting so many loops. Namely, when computing the FFT, do the following instead
x = (1:1:M)'; % x is a column vector
y = (1:1:N) ; % y is a row vector
for u = 0:1:M-1
for v = 0:1:N-1
F2(u+1,v+1) = sum(f .* exp(-2i * pi * (u*(x-1)/M + v*(y-1)/N)), 'all');
end
end
To take this method to the extreme, i.e. not using any loops at all, you would do the following (though this is not recommended, since you lose code readability and the intermediate array grows to M-by-N-by-M-by-N elements)
x = (1:1:M)'; % x is in dimension 1
y = (1:1:N) ; % y is in dimension 2
u = permute(0:1:M-1, [1, 3, 2]); % x-freqs in dimension 3
v = permute(0:1:N-1, [1, 4, 3, 2]); % y-freqs in dimension 4
% sum the exponential terms in x and y, which are in dimensions 1 and 2.
% If you are using r2018a or older, the below summation should be
% sum(sum(..., 1), 2)
% instead of
% sum(..., [1,2])
F3 = sum(f .* exp(-2i * pi * (u.*(x-1)/M + v.*(y-1)/N)), [1, 2]);
% The resulting array F3 is 1 x 1 x M x N, to make it M x N, simply shiftdim or squeeze
F3 = squeeze(F3);
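A quick check that both vectorized versions agree with the built-in (a sketch, assuming f, F2 and F3 as defined above):
max(max(abs(F2 - fft2(f))))  % should be of the order of eps
max(max(abs(F3 - fft2(f))))  % likewise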
Can anyone explain the two lines of code highlighted below which use repmat? This is taken directly from the MathWorks documentation for learning data analysis:
bin_counts = hist(c3); % Histogram bin counts
N = max(bin_counts); % Maximum bin count
mu3 = mean(c3); % Data mean
sigma3 = std(c3); % Data standard deviation
hist(c3) % Plot histogram
hold on
plot([mu3 mu3],[0 N],'r','LineWidth',2) % Mean
% --------------------------------------------------------------
X = repmat(mu3+(1:2)*sigma3,2,1); % WHAT IS THIS?
Y = repmat([0;N],1,2); % WHY IS THIS NECESSARY?
% --------------------------------------------------------------
plot(X,Y,'g','LineWidth',2) % Standard deviations
legend('Data','Mean','Stds')
hold off
Could anyone explain the X = repmat(...) line to me? I know it will be plotted for the 1 and 2 standard deviation lines.
Also, I tried commenting out the Y = ... line, and the plot looks the exact same, so what is the purpose of this line?
Thanks
Let's break the expression into multiple statements.
X = repmat(mu3+(1:2)*sigma3,2,1);
is equivalent to
% First create a row vector containing one and two standard deviations from the mean.
% This is equivalent to xvals = [mu3+1*sigma3, mu3+2*sigma3];
xval = mu3 + (1:2)*sigma3;
% Repeat the matrix twice in the vertical dimension. We want to plot two vertical
% lines so the first and second point should be equal so we just use repmat to repeat them.
% This is equivalent to
% X = [xvals;
% xvals];
X = repmat(xval,2,1);
% To help understand how repmat works, if we had X = repmat(xval,3,2) we would get
% X = [xval, xval;
% xval, xval;
% xval, xval];
The logic is similar for the Y matrix except it repeats in the column direction. Together you end up with
X = [mu3+1*sigma3, mu3+2*sigma3;
mu3+1*sigma3, mu3+2*sigma3];
Y = [0, 0;
N, N];
When plot is called it plots one line per column of the X and Y matrices.
plot(X,Y,'g','LineWidth',2);
is equivalent to
plot([mu3+1*sigma3; mu3+1*sigma3], [0, N], 'g','LineWidth',2);
hold on;
plot([mu3+2*sigma3; mu3+2*sigma3], [0, N], 'g','LineWidth',2);
which plots two vertical lines, one and two standard deviations from the mean.
If you comment out Y then Y isn't defined. The reason the code still worked is probably that the previous value of Y was still stored in the workspace. If you run the command clear before running the script again you will find that the plot command will fail.
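As an aside (not part of the MathWorks example), on R2018b or newer you can draw the same vertical lines without building the repmat matrices, using xline:
xline(mu3 + sigma3,   'g', 'LineWidth', 2);  % one standard deviation above the mean
xline(mu3 + 2*sigma3, 'g', 'LineWidth', 2);  % two standard deviations above the mean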
I did it two ways. Why is the first way (starting at the line mu=mean(X)) not working? What's the difference?
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
% of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by it's standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
%
% Hint: You might find the 'mean' and 'std' functions useful.
%
%mu=mean(X)
%X_norm=X-mu;
%sigma=std(X_norm)
%X_norm(1)=X_norm(1)/sigma(1)
%X_norm(2)=X_norm(2)/sigma(2)
% Calculates mean and std dev for each feature
for i=1:size(mu,2)
mu(1,i) = mean(X(:,i));
sigma(1,i) = std(X(:,i));
X_norm(:,i) = (X(:,i)-mu(1,i))/sigma(1,i);
end
% ============================================================
end
The reason is that you try to subtract a vector from a matrix: mean(X) gives you a row vector with the mean of each column of X, of size [1 x C], while X is of size [R x C]. (Releases before R2016b error on this mismatch; newer releases broadcast it automatically.) A way to solve this in a one-liner is
X = (X-repmat(mean(X,1),size(X,1),1))./repmat(std(X,0,1),size(X,1),1)
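Equivalently, on R2016b or newer, implicit expansion lets you drop the repmat calls (a sketch with the same semantics):
X_norm = (X - mean(X,1)) ./ std(X,0,1);  % subtract column means, divide by column standard deviations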
You need to loop through X. You can verify the output of the code below against the built-in normalize(X):
for i = 1: size(X, 2)
mu = mean(X(:, i));
sigma = std(X(:, i));
X_norm(:, i) = (X(:, i) - mu) ./ sigma
end
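Either way, a quick sanity check with made-up numbers (assuming the completed featureNormalize is on the path): each column of the output should have mean roughly 0 and standard deviation roughly 1.
X = [1 200; 3 400; 5 600];            % hypothetical data, one example per row
[X_norm, mu, sigma] = featureNormalize(X);
mean(X_norm)                          % approximately [0 0]
std(X_norm)                           % approximately [1 1]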
Edit: Some time after I asked this question, an R package called MonoPoly (available here) came out that does exactly what I want. I highly recommend it.
I have a set of points I want to fit a curve to. The curve must be monotonic (never decreasing in value) i.e. the curve can only go upward or stay flat.
I originally had been polyfitting my results and this had been working great until I found a particular dataset. The polyfit for data in this dataset was non-monotonic.
I did some research and found a possible solution in this post:
Use lsqlin. Constrain the first derivative to be non-negative at both
ends of the domain of interest.
I'm coming from a programming rather than math background so this is a little beyond me. I don't know how to constrain the first derivative to be non-negative as he said. Also, I think in my case I need a curve so I should use lsqcurvefit but I don't know how to constrain it to produce monotonic curves.
Further research turned up this post recommending lsqcurvefit but I can't figure out how to use the important part:
Try this non-linear function F(x) also. You use it together with lsqcurvefit, but it requires a start guess on the parameters. It is a nice analytic expression to give as a semi-empirical formula in a paper or a report.
% Monotone function F(x), with c0, c1, c2, c3 variational constants:
%   F(x) = c3 + exp(c0 - c1^2/(4*c2)) * sqrt(pi) * erfi((c1 + 2*c2*x)/(2*sqrt(c2))) / (2*sqrt(c2))
% where erfi(x) = erf(i*x) (see Mathematica); the function looks much like x^3.
% Its derivative is non-negative, like a probability density, f(x) >= 0:
%   f(x) = dF/dx = exp(c0 + c1*x + c2*x.^2)
I must have a monotonic curve but I'm not sure how to do it, even with all of this information. Would a random number be enough for a "start guess"? Is lsqcurvefit best? How can I use it to produce a best-fitting monotonic curve?
Thanks
Here is a simple solution using lsqlin. The derivative constraint is enforced at each data point; this could easily be modified if needed.
Two coefficient matrices are needed: one (C) for the least-squares error calculation and one (A) for the derivatives at the data points.
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
n_data = 10;
% dummy data
x = rand(n_data,1);
d = rand(n_data,1) + linspace(0,1,n_data).';
% limit on derivative - in each data point
b = zeros(n_data,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_data, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*x.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*d
p1 = fliplr((C\d).')
p2 = polyfit(x,d,degree)
% Constrained
p3 = fliplr(lsqlin(C,d,-A,b).')
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(0,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
For the specified input the fitted curves:
Actually this code is more general than what you requested, since the degree of the polynomial can be changed as well.
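Before moving on, a quick way to check whether the constrained fit is really non-decreasing over the plotted range (a sketch using polyder on the p3 from above):
all(polyval(polyder(p3), xx) >= 0)  % true if the fitted polynomial is monotone non-decreasing on [0,1]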
EDIT: enforce the derivative constraint at additional points
The issue pointed out in the comments is that the derivative checks are enforced only at the data points; between those, no checks are performed. Below is a solution to alleviate this problem. The idea: convert the problem into an unconstrained optimization by using a penalty term.
Note that it uses a term pen to penalize violations of the derivative check, so the result is not a true least-squares solution. Additionally, the result depends on the penalty function.
function lsqfit_constr
% Following lsqlin's notations
%--------------------------------------------------------------------------
% PRE-PROCESSING
%--------------------------------------------------------------------------
% for reproducibility
rng(125)
degree = 3;
% data from comment
x = [0.2096 -3.5761 -0.6252 -3.7951 -3.3525 -3.7001 -3.7086 -3.5907].';
d = [95.7750 94.9917 90.8417 62.6917 95.4250 89.2417 89.4333 82.0250].';
n_data = length(d);
% number of equally spaced points to enforce the derivative
n_deriv = 20;
xd = linspace(min(x), max(x), n_deriv);
% limit on derivative - in each data point
b = zeros(n_deriv,1);
% coefficient matrix
C = nan(n_data, degree+1);
% derivative coefficient matrix
A = nan(n_deriv, degree+1);
% loop over polynomial terms
for ii = 1:degree+1
C(:,ii) = x.^(ii-1);
A(:,ii) = (ii-1)*xd.^(ii-2);
end
%--------------------------------------------------------------------------
% FIT - LSQ
%--------------------------------------------------------------------------
% Unconstrained
% p1 = pinv(C)*d
p1 = (C\d);
lsqe = sum((C*p1 - d).^2);
p2 = polyfit(x,d,degree);
% Constrained
[p3, fval] = fminunc(@error_fun, p1);
% correct format for polyval
p1 = fliplr(p1.')
p2
p3 = fliplr(p3.')
fval
%--------------------------------------------------------------------------
% PLOT
%--------------------------------------------------------------------------
xx = linspace(-4,1,100);
plot(x, d, 'x')
hold on
plot(xx, polyval(p1, xx))
plot(xx, polyval(p2, xx),'--')
plot(xx, polyval(p3, xx))
% legend('data', 'lsq-pseudo-inv', 'lsq-polyfit', 'lsq-constrained', 'Location', 'southoutside')
xlabel('X')
ylabel('Y')
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
% penalty term - it is crucial to fine tune it
pen = -sum(der(der<0))*10*lsqe;
e = sqe + pen;
end
end
Gradient-free methods can be used to solve the problem while exactly enforcing the derivative constraint, for example:
[p3, fval] = fminsearch(@error_fun, p_ini);
%--------------------------------------------------------------------------
% NESTED FUNCTION
%--------------------------------------------------------------------------
function e = error_fun(p)
% squared error
sqe = sum((C*p - d).^2);
der = A*p;
if any(der<0)
pen = Inf;
else
pen = 0;
end
e = sqe + pen;
end
fmincon with non-linear constraint might be a better choice.
I'll let you work out the details and tune the algorithms. I hope this is sufficient.
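For reference, a minimal sketch of that last suggestion (assuming the C, A, b, d and p1 from the snippets above, and that the Optimization Toolbox is available). Since the derivative constraint here is linear in the polynomial coefficients, it can be passed to fmincon as a linear inequality rather than a non-linear constraint:
% minimize the true squared error subject to derivative >= 0 at the points in xd
obj = @(p) sum((C*p - d).^2);
p4  = fmincon(obj, p1, -A, b);  % -A*p <= 0  is the same as  A*p >= 0
p4  = fliplr(p4.')              % polyval ordering, as for p1..p3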
You'll have to be easy on me; I am new to MATLAB and SO. I am having an issue using the MATLAB solver to calculate the internal rate of return (IRR). I saw that the Financial Toolbox in MATLAB has a function for this, but I don't believe I have it installed and did not want to get the trial version from their site.
Given the simple nature of my particular IRR calculation, I figured it would be easy enough to simply code it in MATLAB. It is the same yearly cash flow, so what I put into MATLAB was as follows:
syms x k;
IRR = solve(investment == yrSavings* symsum((1+x)^-k,1, nYears));
It doesn't fail, and in fact gives a number. The only problem is that the result is incorrect! I type the IRR back in manually and it never equals the investment. Using wolframalpha I found the actual solution, went back and manually typed in wolframalpha's answer, and the symsum function returned the correct result. I'm not sure what's up with the solver!
The way you have the formula written, symsum assumes that x is the summation variable. I believe you want to sum over k. Try this:
syms x k;
IRR = solve(investment == yrSavings* symsum((1+x)^-k,k,1, nYears));
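A quick way to sanity-check this (with purely hypothetical figures; use your own investment, yrSavings and nYears):
investment = 1000; yrSavings = 300; nYears = 5;  % hypothetical numbers
syms x k
sol = solve(investment == yrSavings*symsum((1+x)^-k, k, 1, nYears), x);
vpa(sol)  % inspect the roots; the single real root greater than -1 is the IRR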
Another approach would be to use the MATLAB function roots to compute the discount factor and then convert that to an IRR. I happened to write such a function the other day, so I thought I might as well post it here for reference. It is heavily commented but the actual code is only three lines.
% Compute the IRR of a stream of cashflows represented as a vector. For example:
% > irr([-123400, 36200, 54800, 48100])
% ans = 0.059616
%
% If the provided stream of cashflows starts with a negative cashflow and all
% other cashflows are positive then `irr` will return a single scalar result.
%
% If multiple IRRs exist, all possible IRRs are returned as a column vector. For
% example:
% > irr([-1.0, 1.0, 1.1, 1.3, 1.0, -3.7])
% ans =
% 0.050699
% 0.824254
%
% If no IRRs exist, or if infinitely many IRRs exist, an empty array is
% returned. For example:
% > irr([1.0, 2.0, 3.0])
% ans = [](0x1)
%
% > irr([0.0])
% ans = [](0x1)
%
% Unlike Excel's IRR function no initial guess can be provided because all
% possible IRRs are returned anyway
%
% This function is not vectorized and will fail if called with a matrix argument
function irrs = irr(cashflows)
%% Overview
% The IRR is defined as the rate, r, such that:
%
% c0 + c1 / (1 + r) + c2 / (1 + r) ^ 2 + ... + cn / (1 + r) ^ n = 0
%
% where c0, c1, c2, ..., cn are the cashflows
%
% We define discount factors, d = 1 / (1 + r), so that the above becomes a
% polynomial in terms of the discount factors:
%
% c0 + c1 * d + c2 * d^2 + ... + cn * d^n = 0
%
% Such a polynomial will have n complex roots but between 0 and n real roots.
% In the important special case that c0 < 0 and c1, c2, ..., cn > 0 there
% will be exactly one real root.
%% Check that input is a vector, not a matrix
assert(isvector(cashflows), 'Input to irr must be a vector of cashflows');
%% Calculation of IRRs
% We use the built-in functions `roots` to compute all roots of the
% polynomial, which are the discount factors `d`. The user will provide a
% vector as [c0, c1, c2, ..., cn] but roots expects something of the form
% [cn, ..., c2, c1, c0] so we reverse the order of cashflows using (end:-1:1).
% At this stage `d` has n elements, most of which are likely complex.
d = roots(cashflows(end:-1:1));
% The function `roots` provides all roots, including complex ones. We are only
% interested in the real roots, so we filter out the complex roots here. There
% may also be spurious real roots less than or equal to 0, which we also filter
% out. Now `rd` could have between 0 and n elements but is likely to have a
% single element.
rd = d(imag(d) == 0.0 & real(d) > 0);
% We have solved everything in terms of discount factors, so we convert to
% rates by inverting the defining formula. Since the discount factors are
% real and positive, the IRRs are real and greater than -100%.
irrs = 1.0 ./ rd - 1.0;
end