Slow calculation of a general polynomial equation - MATLAB

I'm trying to write a general polynomial function which, given a vector of time values time at which to evaluate it, a vector of powers powr, and a vector of coefficients a (where a and powr have the same length), returns the value of the polynomial at each time.
My approach is the following: each element of time is raised element by element to the powers in powr, the result is multiplied element by element with a, and the sum of the resulting vector gives one output element.
for i=1:length(time)
result(i)=sum((time(i).^[powr]).*[a]);
end
The problem is that it takes way too long the more elements time has and/or the longer a and powr are. Is there a way to do this calculation faster?

Armia:
The reason the code runs slowly is that the computation is not polynomially factored, meaning that it doesn't take advantage of running products. Say, for example, your polynomial is of degree 3 (i.e., y = a*x^3 + b*x^2 + c*x + d). The way your code is set up, x (analogous to your time variable) is first cubed (roughly 3 multiplications, plus 1 to multiply by a), then squared (2 multiplications, plus 1 for b), then multiplied by c (1 multiplication), and finally d is added. That amounts to roughly 9 multiplications and 3 additions. Evaluating a degree-n polynomial this way requires on the order of sum(1:n) + n multiplications and n additions.
If instead one factors the polynomial as y = ((a*x + b)*x + c)*x + d (Horner's method), the number of multiplications goes down to 3 (from 9) and the additions remain at 3. In general, the factored form evaluates a degree-n polynomial with n multiplications and n additions. The computational effort of the factored evaluation therefore grows much more slowly with the degree than that of the brute-force approach.
To do this for higher degree polynomials, I suggest you modify the code to:
N = length(time); % Number of time values at which the polynomial is needed.
time = time(:);   % Make time a column vector so it matches result below.
hmp = length(a);  % I'm assuming a contains the polynomial coefficients in ascending order.
result = ones(N,1)*a(end); % Start every entry at the highest-order coefficient.
for i = hmp-1:-1:1
    result = result.*time + a(i); % Work the factored parentheses outward.
end
Here N is the number of time values at which you want to evaluate the polynomial, and hmp is the number of polynomial coefficients. Making result a vector lets the loop compute the polynomial for all your time entries simultaneously, and the loop itself uses the factored (Horner) form, which scales far better than recomputing every power from scratch element by element.
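For what it's worth, MATLAB's built-in polyval evaluates polynomials in the same factored way, so you can sanity-check the loop above against it. The coefficients below are made up, and a is assumed to be a row vector in ascending powers:
a    = [2 -1 0 3];              % hypothetical example: 2 - x + 3*x^3
time = linspace(0, 5, 1e6).';   % column vector of evaluation points
N   = length(time);
hmp = length(a);
result = ones(N,1)*a(end);
for i = hmp-1:-1:1
    result = result.*time + a(i);
end
max(abs(result - polyval(fliplr(a), time)))   % should be ~0 up to round-off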

Related

Why does treating the index as a continuous variable not work when performing an inverse discrete Fourier transform?

I have a set of points describing a closed curve in the complex plane, call it Z = [z_1, ..., z_N]. I'd like to interpolate this curve, and since it's periodic, trigonometric interpolation seemed a natural choice (especially because of its increased accuracy). By performing the FFT, we obtain the Fourier coefficients:
F = fft(Z);
At this point, we could get Z back by the formula (where 1i is the imaginary unit, and we use (k-1)*(n-1) because MATLAB indexing starts at 1)
Z(n) = (1/N) * sum_{k=1}^{N} F(k)*exp(1i*2*pi*(k-1)*(n-1)/N),   1 <= n <= N.
My question
Is there any reason why n must be an integer? Presumably, if we treat n as any real number between 1 and N, we will just get more points on the interpolated curve. Is this true? For example, if we wanted to double the number of points, could we not set
Z_new(n) = (1/N) * sum_{k=1}^{N} F(k)*exp(1i*2*pi*(k-1)*(n-1)/N),   with n = 1, 1.5, 2, 2.5, ..., N-1, N-0.5, N?
The new points are of course just subject to some interpolation error, but they'll be fairly accurate, right? The reason I'm asking this question is because this method is not working for me. When I try to do this, I get a garbled mess of points that makes no sense.
(By the way, I know that I could use the interpft() command, but I'd like to add points only in certain areas of the curve, for example between z_a and z_b)
The point is that when n is an integer, the complex exponentials in the formula are orthogonal and can serve as a basis for the space. When n is not an integer, those exponential functions are no longer orthogonal, so expanding a function in this non-orthogonal set is not as meaningful as you might expect.
For the orthogonality condition you can see the following as an example (from here). As you can check, if you pick two non-integer values n_1 and n_2, the corresponding inner products are no longer zero, so the functions are not orthogonal.
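A quick numerical way to see this in MATLAB (N and the index values below are arbitrary picks):
N = 16;
k = (0:N-1).';
e = @(n) exp(1i*2*pi*k*n/N);   % samples of the exponential "basis" vector for index n
abs(e(3)' * e(5))              % integer indices: essentially zero, i.e. orthogonal
abs(e(3)' * e(5.5))            % non-integer index: clearly nonzero, i.e. not orthogonal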

What's a floating-point operation and how to count them (in MATLAB)?

I have an assignment where I basically need to count the number of floating point operations in a simple program, which involves a loop, a matrix, and operations such as *, + and ^.
From my understanding, a floating-point operation is an operation that involves floating-point numbers, and we may be interested in counting these operations because I think they may be more expensive for the computer. If you want to add more details to this part, it would be nice.
My problem is that I have no way of knowing exactly which operations involve floating-point numbers, unless I use functions such as isfloat. In that case, would it be enough for just one of the operands to be floating-point for the operation to be considered a floating-point operation? If not, why? Can you add more details on this?
For example, suppose I've the following simple function:
function [r, n] = naive(c, x)
% c is the vector of coefficients of the polynomial
% The coefficients should be given as follows
% c(1) = coefficient of x^0 (or 1).
% c(length(c)) = coefficient of the largest power of x
% x is the point to evaluate the polynomial at
% r is the result of the evaluation
% (Assumes that the entries are integers)
r = c(1);
n = 0;
for i=2:length(c)
r = r + c(i) * x^(i - 1);
n = n + 2 + (i - 1);
end
end
which basically evaluates a polynomial at x given the coefficients in a vector c.
As you can see from the code, n is keeping track of the floating-point operations. But I'm counting every mathematical operation (except the assignment) as a floating-point operation, which of course might not be right, or is it? Either way, why?
Both the coefficients in c and the point x might be floating-point numbers. So, instead of counting every operation as a floating-point operation, should we first check, for example with isfloat, whether the numbers are floating point, and only then increment n?
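For example, the conditional count might look something like this (just a sketch; whether this is the right counting model is exactly what I'm unsure about):
r = c(1);
n = 0;
for i = 2:length(c)
    r = r + c(i) * x^(i - 1);
    if isfloat(x) || isfloat(c)   % count only when an operand is floating point
        n = n + 2 + (i - 1);      % 1 addition, 1 multiplication, and (i-1) for the power
    end
end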
Note, I'm aware of the function flops, which, from what I understood, should count the floating-point operations, but it's deprecated, and mostly I would like to understand these concepts better, and therefore try to count them manually.
Thanks for any help!

Minimize quadratic form energy using matlab. Which function should I use?

I'm new to MATLAB and am trying to do some energy minimization work with it. The energy function takes a 3-channel image as input. For every channel, there's an energy term that looks like this:
E = x'Ax + ||Bx||^2 + w*||x-c||^2,
where x and c are vectors of length N, and A is an N-by-N matrix. A is sparse and positive semi-definite with 25 non-zero elements per row, giving constraints on all elements of x. B is of size M-by-N; it is sparse too, with 2 non-zero elements per row. N is about 850,000 and M is about 1,000,000. Although B gives more than N constraints, some elements of x do not appear in the ||Bx||^2 term. The weight w of the ||x-c||^2 term is quite small, say 1e-3.
I've searched the MATLAB documentation. It looks like I should use lsqnonlin for this problem. Is there a specially designed function or option for quadratic-form minimization in MATLAB?
For those who are familiar with the computer vision literature, I'm actually trying to implement the algorithm in "Coherent Intrinsic Images from Photo Collections". The authors said they use the MATLAB backslash operator to minimize the energy, but I can't see how a backslash operator can be used for a quadratic-form problem.
Yes, there is a function specifically for optimizing quadratic cost functions: quadprog. However, if you don't have any linear constraints, then you should be able to write your cost function as
E = x'Mx/2 + v'x + k
Finding the point of zero gradient (hopefully a minimum) can then be achieved by taking first derivatives:
dE/dx = Mx + v
and setting them to zero, which gives the solution:
x = -M\v
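Applied to the energy in the question, and assuming A is symmetric as stated and everything is stored as sparse matrices, the backslash solve the authors mention would look roughly like this (a sketch, not the paper's actual code):
% Per channel: minimise E = x'*A*x + norm(B*x)^2 + w*norm(x - c)^2.
% Expanding gives E = x'*(A + B'*B + w*I)*x - 2*w*c'*x + w*(c'*c),
% so the zero-gradient condition is (A + B'*B + w*I)*x = w*c.
n = size(A, 1);
H = A + B.'*B + w*speye(n);   % sparse, symmetric quadratic part
x = H \ (w*c);                % one sparse backslash solve per channel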

Calculating an inverse matrix in Matlab

I'm running an optimization algorithm that requires calculation of the inverse of a matrix. The goal of the algorithm is to eliminate negative values from the matrix A and obtain the new matrix B. Basically, I start with known square matrices B and C of the same size.
First, I calculate the matrix A, which is equal to:
A = B^-1 * C
Or in Matlab:
A = B\C;
I use this because Matlab told me B\C is more accurate than inv(B)*C.
The negative values in A are then divided by two, and A is then normalised so that its rows have length 1. Using this new A, I calculate a new B with:
(1/N) * A * C' = B^-1
where N is just a scaling factor (# of columns in A). This new B would then be used again in the first step and these iterations continue until the negatives in A are gone.
My problem is I have to calculate B from the second equation and then normalise it.
invB = (1/N)*A*C';
B = inv(invB);
I've been calculating B using inv(B^-1) but after a few iterations I start getting messages that B^-1 is "close to singular or badly scaled."
This algorithm actually works for smaller matrices (around 70x70) but when it gets up to about 500x500 I start getting these messages.
Are there any better ways to calculate inv(B^-1)?
You should definitely heed warnings about singular matrices. Results in numerical linear algebra tend to break down as you move toward matrices with high condition numbers. The underlying idea is that if
A*b_1 = c
and we're actually solving the problem (because we are using approximate numbers when we use computers)
(A + matrix error)*b_2 = (c + vector error)
how close are b_1 and b_2 as a function of the matrix and vector errors? When A has small condition number b_1 and b_2 are close. When A has large condition number b_1 and b_2 are not close.
There is an informative piece of analysis you could do on your algorithm. At each iteration, after you've found B, use MATLAB to find its condition number. This is
cond(B)
You will likely see the number climb rapidly. This indicates that every time you iterate your algorithm, you should trust your result for B less and less.
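A minimal sketch of that monitoring, with the loop body simply mirroring the steps you describe (maxIter is a made-up stopping bound):
N = size(C, 2);                          % scaling factor, as in your second equation
for iter = 1:maxIter
    A = B \ C;                           % A = B^-1 * C
    if all(A(:) >= 0), break; end        % done once the negatives are gone
    A(A < 0) = A(A < 0) / 2;             % halve the negative entries
    A = A ./ sqrt(sum(A.^2, 2));         % renormalise rows to unit length (implicit expansion)
    invB = (1/N) * A * C';               % new B^-1
    fprintf('iter %3d: cond = %.3g, rcond = %.3g\n', iter, cond(invB), rcond(invB));
    B = inv(invB);
end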
Problems like this crop up all the time in numerical mathematics. If you'll be working with numerical algorithms frequently, you should take some time to familiarize yourself with the role of condition numbers in the field and with preconditioning techniques. My preferred text for this is "Numerical Linear Algebra" by Lloyd Trefethen, but any text on numerical linear algebra should address some of these issues.
Best of luck,
Andrew
The main issue is that your matrix has a high condition number (i.e., a really small rcond(B) in your case). This is probably due to the iterative structure of your algorithm: with each iteration your small singular values get smaller and smaller, so your condition number grows exponentially. You should look into preconditioning to avoid this kind of behavior.

Faster projected-norm (quadratic-form, metric-matrix...) style computations

I need to perform lots of evaluations of the form
X(:,i)' * A * X(:,i) i = 1...n
where X(:,i) is a vector and A is a symmetric matrix. Ostensibly, I can either do this in a loop
z = zeros(1,n);                   % preallocate the result
for i = 1:n
    z(i) = X(:,i)' * A * X(:,i);  % one quadratic form per column
end
which is slow, or vectorise it as
z = diag(X' * A * X)
which wastes RAM unacceptably when X has a lot of columns. Currently I am compromising on
Y = A * X
for i = 1:n
    z(i) = Y(:,i)' * X(:,i);      % dot product of corresponding columns
end
which is a little faster/lighter but still seems unsatisfactory.
I was hoping there might be some matlab/scilab idiom or trick to achieve this result more efficiently?
Try this in MATLAB:
z = sum(X.*(A*X));
This gives results equivalent to Federico's suggestion using the function DOT, but should run slightly faster. This is because the DOT function internally computes the result the same way as I did above using the SUM function. However, DOT also has additional input argument checks and extra computation for cases where you are dealing with complex numbers, which is extra overhead you probably don't want or need.
A note on computational efficiency:
Even though the time difference is small between how fast the two methods run, if you are going to be performing the operation many times over it's going to start to add up. To test the relative speeds, I created two 100-by-100 matrices of random values and timed the two methods over many runs to get an average execution time:
METHOD                  AVERAGE EXECUTION TIME
----------------------------------------------
Z = sum(X.*Y);          0.0002595 sec
Z = dot(X,Y);           0.0003627 sec
Using SUM instead of DOT therefore reduces the execution time of this operation by about 28% for matrices with around 10,000 elements. The larger the matrices, the more negligible this difference will be between the two methods.
To summarize, if this computation represents a significant bottleneck in how fast your code is running, I'd go with the solution using SUM. Otherwise, either solution should be fine.
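For reference, a rough version of the timing comparison described above might look like this (the sizes are arbitrary and absolute numbers will vary by machine and MATLAB version):
n = 100;
A = randn(n); A = (A + A')/2;            % symmetric test matrix
X = randn(n);                            % columns are the vectors X(:,i)
t_diag = timeit(@() diag(X' * A * X));   % vectorised, but forms an n-by-n temporary
t_sum  = timeit(@() sum(X .* (A * X)));  % no large temporary
t_dot  = timeit(@() dot(X, A * X));      % same idea via DOT
fprintf('diag: %.3g s, sum: %.3g s, dot: %.3g s\n', t_diag, t_sum, t_dot);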
Try this:
z = dot(X, A*X)
I don't have Matlab here to test, but it works on Octave, so I expect Matlab to have an analogous dot() function.
From Octave's help:
-- Function File: dot (X, Y, DIM)
Computes the dot product of two vectors. If X and Y are matrices,
calculate the dot-product along the first non-singleton dimension.
If the optional argument DIM is given, calculate the dot-product
along this dimension.
For completeness, gnovice's answer in Scilab would be (with Y = A*X):
z = sum(X .* Y, 1)'