Polynomial with specific roots

Is there a way to find a polynomial having specific roots, e.g. 17, 29, 33, etc.? The polynomial should evaluate to zero at all of these values.
Is there any programming library available to achieve this?

(x-17)(x-29)(x-33) has the roots you mention. If you only need to evaluate the polynomial at certain points, this factored form is enough. If you need all of its coefficients, your best bet is probably a polynomial library that multiplies out the binomials.
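For instance, MATLAB's built-in poly and polyval functions do exactly this: poly builds the coefficients from a vector of roots, and polyval evaluates the result. A minimal sketch:

r = [17 29 33];      % the desired roots
c = poly(r);         % coefficients of (x-17)(x-29)(x-33), highest power first
polyval(c, 29)       % evaluates to 0 (up to roundoff)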

Related

Checking if a Rational Function Simplifies to a Polynomial in Matlab

Is there a way to check if a rational function is a polynomial in Matlab?
I have a big rational function, call it R, that I am trying to show is a polynomial. I've tried the simplify and simplifyFraction functions and the following (not very effective) procedure:
Split it into denominator and numerator:
[num,den] = numden(R);
Calculate the roots of both polynomials:
r_num = roots(sym2poly(num));
r_den = roots(sym2poly(den));
Check whether every element of r_den also appears in r_num.
Because of numerical imprecision, I haven't been able to come up with a reliable way of doing this last step.
This is not an easy problem: finding the greatest common divisor of polynomials is a very active area of research, and there are plenty of publications you can find online.
The main problem is that root finding is an ill-conditioned problem, and recently some experts have been trying to combine numerical computation with symbolic representations. If you search for the ERES method you will find an entry point, together with the thesis of Christou.
This problem is particularly important to signals and control people because of transfer-function representations and pole-zero cancellations. Matlab goes a long way to make sure that everything is OK, and accepts a cancellation when a pole and a zero fall within a minimal neighborhood of each other.
So as a quick remedy, convert your numerator and denominator coefficients to 1-D vectors, say a and b, and use minreal(tf(a,b)). Then you can extract the num and den of that transfer-function representation.
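A minimal sketch of that remedy, assuming the Symbolic Math and Control System Toolboxes are available and R is the symbolic rational function from the question:

[num, den] = numden(R);        % split into numerator and denominator
a = sym2poly(num);             % numerator coefficients as a 1-D vector
b = sym2poly(den);             % denominator coefficients
G = minreal(tf(a, b));         % cancel (near-)common pole/zero pairs
[an, bn] = tfdata(G, 'v');     % reduced numerator and denominator
isPoly = isscalar(bn);         % denominator reduced to a constant => R is a polynomial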
Shameless plug: I am the author of a python3 library and I also implemented a system theoretical approach. Here and here is the full implementation details with citations about LCM and GCD operations.

Is mldivide always the same as OLS in MATLAB?

I am doing a comparison of some alternate linear regression techniques.
Clearly these will be benchmarked relative to OLS (ordinary least squares).
But I just want a pure OLS method, with no preconditioning of the data, so that I can uncover ill-conditioning in the data; regress() does such preconditioning.
I had hoped to simply use the classic (X'X)^(-1) X'Y expression. However, this would necessitate using the inv() function, and the MATLAB help page for inv() recommends using mldivide for least-squares estimation, as it is superior in execution time and numerical accuracy.
However, I'm unsure whether it's okay to use mldivide to find the OLS estimates. Since it is an operator, it seems I can't see what it is doing by stepping into it in the debugger.
Can I assume that mldivide will produce the same answers as theoretical OLS under all conditions, including in the presence of singular or ill-conditioned matrices?
If not, what is the best way to compute pure OLS answers in MATLAB without any preconditioning of the data?
The short answer is:
When the system A*x = b is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)), while MLDIVIDE will pick the solution with the fewest non-zero elements.
As for how mldivide works, MathWorks also posted a description of how the function operates.
However, you might also want to have a look at this answer for the first part of the discussion about mldivide vs. other methods when the matrix A is square.
Depending on the shape and composition of the matrix, you would use Cholesky decomposition for a symmetric positive definite matrix, LU decomposition for other square matrices, or QR decomposition otherwise. You can then hold onto the factorization and use linsolve to essentially just do the back-substitution for you.
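A hedged sketch of that idea; the matrix X and vector y here are hypothetical stand-ins for your regression data:

X = randn(100, 5);            % tall design matrix => overdetermined system
y = randn(100, 1);
beta = X \ y;                 % mldivide: QR-based least squares for a tall X
% The normal-equations route, telling linsolve the matrix is symmetric
% positive definite so it uses a Cholesky solve (less robust than QR
% when X is badly conditioned):
opts.SYM = true; opts.POSDEF = true;
beta2 = linsolve(X'*X, X'*y, opts);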
As to whether mldivide is preferable to pinv when A is rank-deficient (whether rectangular or square but singular), the two options will give you two of the infinitely many least-squares solutions. According to those docs, both are exact solutions:
Both of these are exact solutions in the sense that norm(A*x-b) and norm(A*y-b) are on the order of roundoff error.
According to the help page, pinv gives a least-squares solution to a system of equations, so to solve the system Ax = b, just do x = pinv(A)*b.
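A small illustration of the difference on a rank-deficient system; the data here are made up for the example:

A = [1 2; 2 4; 3 6];              % rank-1 tall matrix
b = [1; 2; 3.1];
x1 = A \ b;                       % basic solution (warns about rank deficiency)
x2 = pinv(A) * b;                 % minimum-norm least-squares solution
[norm(A*x1 - b), norm(A*x2 - b)]  % residuals agree up to roundoff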

Explanation of two integral equations and implementation

I have a problem with the two equations shown in the pictures.
I have two vectors representing C(m) and S(m) in the two equations. I am trying to implement these equations in Matlab. Instead of doing a continuous integral, I think I should do a summation. For example, for the first equation:
A1 = sqrt(sum(C.^2));
Am I right? Also, I am not sure how to implement the second equation, which contains a ||dM|| term. Please help.
What is the mathematical meaning of these two equations? I think the first one may be related to a 'sum of squares': if C(m) is a vector, does this equation measure the total variance of the random variable in vector C, or some kind of average of vector C? What about the second one?
Thanks very much for your help!
A.
In MATLAB there are typically two different ways to do an integration.
For people who have access to the symbolic toolbox, algebraic integration is an option. If this is the case for you, I would look into help int and see which inputs you need.
For the rest, numerical integration is available. This basically means that you evaluate the function at a lot of points and then take a weighted sum of the function values at those points (the mean of the values times the length of the interval is the simplest version).
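As a concrete sketch, if the first equation is A = sqrt of the integral of C(m)^2 dm (which is my guess at what the pictures show), and dm is the assumed sample spacing of your vector C:

dm = 0.01;                       % sample spacing (assumption)
A1 = sqrt(sum(C.^2) * dm);       % Riemann-sum approximation of the integral
A1t = sqrt(trapz(C.^2) * dm);    % trapezoidal rule, usually more accurate

Note that your original A1 = sqrt(sum(C.^2)) omits the dm factor, so it only matches the integral if your samples are spaced one unit apart.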
For the mathematical meaning some more context would be helpful, and you may want to ask this question at math.stackexchange.com or on the site of whatever field you are in. (stats, physics?)

maximum of a polynomial

I have a polynomial of order N (where N is even). This polynomial tends to minus infinity as x goes to plus or minus infinity, so it has a maximum. What I am doing right now is taking the derivative with polyder, then finding the roots of the resulting polynomial of order N-1 with Matlab's roots function, which returns N-1 solutions. Then I pick the real root that actually maximizes the polynomial. The problem is that I update my polynomial often, and at each time step I use the above procedure to find the maximizer. The roots call takes too much computation time and makes my application slow. Is there a way, either in Matlab or as a proposed algorithm, to do this maximization efficiently, i.e. by finding just one solution instead of N-1 solutions? Thanks.
Edit: I would also like to know whether there is a routine in Matlab that returns only the real roots, instead of roots, which returns all real and complex ones.
I think that you are probably out of luck. If the coefficients of the polynomial change at every time step in an arbitrary fashion, then ultimately you are faced with a distinct and unrelated optimisation problem at every stage. There is insufficient information available to consider calculating just a subset of the roots of the derivative polynomial: how could you know which derivative root provides the maximum stationary point of the polynomial without comparing the function value at ALL of the derivative roots?
If your polynomial coefficients were perturbed at each step by only a (bounded) small amount, or in a predictable manner, then it is conceivable that you could try something iterative to refine the solution at each step, for example something crude such as using your previous roots as the starting points for a new set of Newton iterations to identify the updated derivative roots. But the question does not suggest that this is in fact the case, so I am just guessing. I could be completely wrong here, but you might just be out of luck in getting something faster unless you can provide more information or have some kind of relationship between the polynomials generated at each step.
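A hedged sketch of that warm-start idea; r_prev is a hypothetical variable holding the derivative roots from the previous time step, and P holds the current coefficients, highest power first:

Pprime  = (numel(P)-1:-1:1)      .* P(1:end-1);       % p'(x) coefficients
Psecond = (numel(Pprime)-1:-1:1) .* Pprime(1:end-1);  % p''(x) coefficients
r = r_prev;                      % start from last step's stationary points
for it = 1:5                     % a few fixed Newton steps on p'(x) = 0
    r = r - polyval(Pprime, r) ./ polyval(Psecond, r);
end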
There is a File Exchange submission by Steve Morris which finds all real roots of a function on a given interval. It does so by interpolating the polynomial with a Chebyshev polynomial and finding its roots.
You can modify the eig evaluation of the companion matrix in that code to use eigs instead. This lets you find only one (or a few) roots and save time. (There is also a fair chance that the roots or extrema of a Chebyshev polynomial can be computed analytically, although I could not find a good reference for that, or even a bad one for that matter...)
Another attempt you can make at speeding things up is to note that polyder does nothing more than
Pprime = (numel(P)-1:-1:1) .* P(1:end-1);    % derivative coefficients, highest power first
for your polynomial coefficient vector P. Also, roots does nothing more than find the eigenvalues of the companion matrix, so you could compute those eigenvalues yourself, which avoids the call to roots. Both changes can be beneficial, because calls to non-built-in functions inside a loop prevent Matlab's JIT compiler from translating the loop to machine code; avoiding them can otherwise give you a large speed gain (factors of 100 or more are not uncommon).
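A minimal sketch of doing both steps inline; the companion-matrix construction mirrors what roots does internally, and the example coefficients here are made up:

P = [-1 0 2 0 0];                    % example: -x^4 + 2x^2 (hypothetical)
Pprime = (numel(P)-1:-1:1) .* P(1:end-1);   % derivative coefficients
c = Pprime / Pprime(1);              % make the derivative monic
n = numel(c) - 1;
C = diag(ones(n-1,1), -1);           % companion matrix of the derivative
C(1,:) = -c(2:end);
r = eig(C);                          % stationary points of P
r = real(r(abs(imag(r)) < 1e-10));   % keep only the (numerically) real ones
[~, k] = max(polyval(P, r));         % evaluate P and pick the maximizer
xmax = r(k);                         % here xmax is 1 or -1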

Linear least-squares fit with constraint - any ideas?

I have a problem where I am fitting a high-order polynomial to (not very) noisy data using linear least squares. Currently I'm using polynomial orders around 15 to 25, which work surprisingly well: the dependence is very nearly linear, but the accuracy of modelling the 'very nearly' is critical. I'm using Matlab's polyfit() function and (obviously) normalising the x-data. This generally works fine, but I have come across an issue with some recent datasets: the fitted polynomial has extrema within the x-data interval. For the application I'm working on this is a no-no: the polynomial model must have no stationary points over the x-interval.
So I need to add a constraint to the least-squares problem: the derivative of the fitted polynomial must be strictly positive over a known x-range (or strictly negative; this depends on the data, but a simple linear fit will quickly tell me which it is). I have had a quick look at the available Optimization Toolbox functions, but I admit I'm at a loss as to how to go about this. Does anyone have any suggestions?
[I appreciate there are probably better models than polynomials for this data, but in the short term it isn't feasible to change the form of the model]
[A closing note: I have finally got the go-ahead to replace this awful polynomial model! I am going to adopt a nonparametric approach, spline smoothing, using the excellent SPLINEFIT code by Jonas Lundgren. This has the advantage that I'm already using a spline model in the end-user application, so I already have C# code available to evaluate a spline model]
You could use cftool and its option to exclude data points.
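If you want a programmatic route instead, one possible (untested) sketch uses lsqlin from the Optimization Toolbox to minimise the least-squares error subject to p'(x) > 0 on a grid; x, y, and the order n here are hypothetical stand-ins for your data:

n  = 15;                                % polynomial order (assumed)
xs = (x(:) - mean(x)) / std(x);         % normalise the x-data as polyfit does
V  = xs .^ (n:-1:0);                    % Vandermonde matrix for the fit
xg = linspace(min(xs), max(xs), 200)';  % grid where the constraint is enforced
D  = [(n:-1:1) .* (xg .^ (n-1:-1:0)), zeros(200,1)];  % rows evaluate p'(xg)
c  = lsqlin(V, y(:), -D, -1e-6*ones(200,1));  % min ||V*c - y|| s.t. p'(xg) >= 1e-6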