Explaining the terms of a 2nd degree polynomial equation

This is a pretty basic question, but I have never thought about polynomials in this way before. I want to compare different polynomials of the form C0 + C1*x + C2*x^2 that I have generated from raw data. I am not a mathematician by training, so I have never had to explain the theory behind a polynomial before. As an example, if only 1 of 5 polynomial fits I have generated has a negative value for C1 (the raw data are of the same type but come from 5 different sources), how could I explain this? Is C1 more heavily affected by the mean of all of the data, or does the total range affect it more, etc.? I want a way to explain what each term (C0, C1, C2) is most affected by. Thank you.

I've also never thought about polynomials in this way before. There might be better explanations for specific formulas, but I don't think there is a general one. Might this help?
f(x) = C0 + C1*x + C2*x^2
C0: the constant term. f(0) = C0.
C1: the linear term. f(x) is approximately C0 + C1*x for small x.
C2: the quadratic term. f(x) is approximately C2*x^2 for large x.
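If it helps to make this concrete, here is a minimal MATLAB sketch (the synthetic data and coefficient values are my own invention) that fits two data sets with polyfit and compares the fitted C1 terms:
% Fit C0 + C1*x + C2*x^2 to two synthetic data sets and compare coefficients.
% polyfit returns coefficients in descending powers, so p = [C2, C1, C0].
x  = linspace(0, 10, 50)';
y1 = 2 - 0.5*x + 0.3*x.^2 + 0.1*randn(size(x)); % built with a negative C1
y2 = 2 + 0.5*x + 0.3*x.^2 + 0.1*randn(size(x)); % built with a positive C1
p1 = polyfit(x, y1, 2); % p1(2) should come out near -0.5
p2 = polyfit(x, y2, 2); % p2(2) should come out near +0.5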

Related

Fit a quadratic function of two variables (Practitioner Black Scholes Deterministic Volatility Functions)

I am attempting to fit the parameters of a deterministic volatility function for use in the practitioner Black Scholes model.
The formula for which I want to estimate the "a" parameters is:
sig = a0 + a1*K + a2*K^2 + a3*T + a4*T^2 + a5*K*T
Where sig, K and T are known; I have multiple observations of K, T and sig combinations but only want a single set of "a" parameters.
How might I go about this? My google searches and own attempts all failed, unfortunately.
Thank you!
The function lsqcurvefit allows you to define the function that you want to fit. It should be straightforward from there on.
http://se.mathworks.com/help/optim/ug/lsqcurvefit.html
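For what it's worth, a minimal sketch of such a call (assuming K, T, and sig are n-by-1 vectors; the function handle name and the zero starting guess are my own choices):
% Pack K and T as the two columns of xdata; a holds the six parameters.
dvf = @(a, xdata) a(1) + a(2)*xdata(:,1) + a(3)*xdata(:,1).^2 ...
                + a(4)*xdata(:,2) + a(5)*xdata(:,2).^2 ...
                + a(6)*xdata(:,1).*xdata(:,2);
a0 = zeros(6, 1); % arbitrary starting guess
a  = lsqcurvefit(dvf, a0, [K, T], sig); % requires the Optimization Toolbox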
Some Mathematics
Notation stuff: index your observations by i and add an error term.
sig_i = a0 + a1*K_i + a2*K_i^2 + a3*T_i + a4*T_i^2 + a5*K_i*T_i + e_i
Something probably not insane to do would be to minimize the square of the error term:
minimize (over a) \sum_i e_i^2
The solution to least squares is a simple linear algebra problem. (See https://stats.stackexchange.com/questions/186196/understanding-linear-algebra-in-ordinary-least-squares-derivation/186289#186289 for a solution if you really care.) (Further note: e_i is a linear function of a. I'm not sure why you would need lsqcurvefit as another answer suggested?)
Matlab Code for OLS (Ordinary Least Squares)
Assuming sig, K, and T are n-by-1 vectors:
y = sig;
KT = K .* T; % elementwise product for the interaction term
X = [ones(length(sig),1), K, K.^2, T, T.^2, KT];
a = X \ y; % least-squares solution; equivalent to inv(X'*X)*(X'*y) but computed in a numerically better way
This is an ordinary least squares regression of y on X.
Further Ideas
Depending on the distribution of your error terms, correlated errors, etc., regular OLS may be inefficient or possibly even inappropriate; I'm not familiar enough with the details of this problem to know. You may want to check what people do.
E.g. a technique that's less sensitive to big outliers is to minimize the absolute value of the errors:
minimize (over a) \sum_i |e_i|
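A least-absolute-deviations fit has no closed-form solution, but a quick (if crude) sketch using fminsearch, reusing the X and y built above:
a_ols = X \ y; % OLS solution as a starting point
a_lad = fminsearch(@(a) sum(abs(y - X*a)), a_ols); % minimize the sum of |e_i|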
If you have a good statistical model of how the data are generated, you could do maximum likelihood estimation. Anyway, this rapidly devolves into a multi-quarter statistics class.

Solving a pair of real (and intertwined) equations

I have a first-order difference equation: y[n] = z0 * y[n-1] + x[n] (2-3). Usually what we would do is apply the z-transform and then use the "filter" function, but my teacher wants to do it differently:
In the first-order difference equation (2-3), let yR[n] and yI[n]
denote the real and imaginary parts of y[n]. Write a pair of
real-valued difference equations expressing yR[n] and yI[n] in terms
of yR[n-1], yI[n-1], x[n], and r, cos m, and sin m
(I forgot to mention that x[n] = G*dirac[n], where G is a complex constant, and z0 = r*e^(j*m), which is where r, cos m, and sin m come from.)
Here is my result (this is the best I could think of):
yR[n] = r*(yR[n-1]*cos(m) - yI[n-1]*sin(m)) + xR[n]
yI[n] = r*(yI[n-1]*cos(m) + yR[n-1]*sin(m)) + xI[n]
Then the next question is:
Write a MATLAB program to implement this pair of real equations, and
use this program to generate the impulse response of equation (2-3)
for r=1/2 and m=0, and m=pi/4. For these 2 cases, plot the real part
of the impulse responses obtained. Compare to the real part of the
output from the complex recursion (2-3)
What I don't understand is how I can do this besides applying the z-transform and then using the "filter" function. I have looked on the web, and there was something about the state-space form, but I don't know if it's relevant or not. Also, I'm not looking to have the solution handed to me on a silver platter; I just want to know how to work it out. Thank you very much!
You are on the right track. For a digital system such as you have, you simply set the initial input and run the recursion. There is no need to do anything fancy; you are very much overthinking the problem. In other words, for a simple function, you could do this:
f(0)=1;
f(n)=a*f(n-1);
Essentially you would loop over some range (maybe 20 points), where f(n) looks at the previous value of the function.
For your function, I suspect you simply set the real part (yR[0]) to 1, yI[0]=0, and run it for a while.
I know Matlab is 1-based, so you probably want to set the first element to 1 instead, but the same principle applies.
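To make that concrete, here is a minimal sketch of the pair of real recursions (assuming G = 1 so that yR starts at 1; r, m, and the length N are taken from the assignment), compared against the complex recursion (2-3) via filter:
r = 0.5; m = pi/4; N = 20;
yR = zeros(1, N); yI = zeros(1, N);
yR(1) = 1; % impulse input: x[0] = G = 1 (Matlab indices start at 1)
for n = 2:N % x[n] = 0 for n > 0, so no input term inside the loop
    yR(n) = r*(yR(n-1)*cos(m) - yI(n-1)*sin(m));
    yI(n) = r*(yI(n-1)*cos(m) + yR(n-1)*sin(m));
end
y = filter(1, [1, -r*exp(1j*m)], [1, zeros(1, N-1)]); % complex recursion (2-3)
stem(0:N-1, yR); hold on; stem(0:N-1, real(y), '--'); hold off % should overlap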

How can I make an all-in-one polynomial from multiple polynomials?

I'm not familiar with advanced math, so I don't know where to start.
I found an article about this and am following its description, but it is not easy for me.
I'm not sure how to make just one polynomial equation (or something like that) from the above 4 polynomial equations. Is this possible?
If yes, would you please help me see how to get such a polynomial (or equation)? If not, would you let me know why?
UPDATE
I'd like to try as following
clear all;
clc
ab = (H' * H) \ (H' * y); % normal equations: solve H'*H*ab = H'*y
y2 = H * ab; % fitted values
Finally I get some numbers like this.
So, is this what it means?
As you can see from the red curve, something is wrong.
What did I miss?
All the article says is "you can combine multiple data sets into one to get a single polynomial".
You can also go in the other direction: subdivide your data set into pieces and fit as many separate polynomials as you wish. (This is the idea behind n-fold cross-validation.)
You start with a collection of n points (x, y). (Keep it simple by having only one independent variable x and one dependent variable y.)
Your first step should be to plot the data, look at it, and think about what kind of relationship between the two would explain it well.
Your next step is to assume some form for the relationship between the two. People like polynomials because they're easy to understand and work with, but other, more complex relationships are possible.
One polynomial might be:
y = c0 + c1*x + c2*x^2 + c3*x^3
This is your general relationship between the dependent variable y and the independent variable x.
You have n points (x, y). Your function can't go through every point. In the example I gave there are only four coefficients. How do you calculate the coefficients for n >> 4?
That's where the matrices come in. You have n equations:
y(1) = c0 + c1*x(1) + c2*x(1)^2 + c3*x(1)^3
....
y(n) = c0 + c1*x(n) + c2*x(n)^2 + c3*x(n)^3
You can write these as a single matrix equation:
y = H * c
where H is the n-by-4 matrix whose i-th row is [1, x(i), x(i)^2, x(i)^3].
Premultiply both sides by transpose(H):
transpose(H) * y = transpose(H) * H * c
Do a standard matrix inversion or LU decomposition to solve for the unknown vector of coefficients c. These particular coefficients minimize the sum of squares of differences between the function evaluated at each point x and your actual value y.
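In Matlab this is only a few lines (a sketch, assuming x and y are n-by-1 column vectors):
H = [ones(size(x)), x, x.^2, x.^3]; % one row per observation
c = H \ y; % QR-based least squares; equivalent to (H'*H)\(H'*y) but numerically better
yfit = H * c; % fitted values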
Update:
I don't know where this fixation with those polynomials comes from.
Your y vector? Wrong. Your H matrix? Wrong again.
If you must insist on using those polynomials, here's what I'd recommend: You have a range of x values in your plot. Let's say you have 100 x values, equally spaced between 0 and your max value. Those are the values to plug into your H matrix.
Use the polynomials to synthesize sets of y values, one for each polynomial.
Combine all of them into a single large problem and solve for a new set of coefficients. If you want a 3rd order polynomial, you'll have just four coefficients and a single polynomial. It'll represent the least-squares best approximation to all of the synthesized data you created with your four polynomials.
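A sketch of that combine-and-refit step (p1 through p4 and xmax are hypothetical placeholders; each p is a coefficient vector in polyval order, highest power first):
x    = linspace(0, xmax, 100)'; % 100 equally spaced x values
xall = repmat(x, 4, 1); % stack the four synthesized data sets
yall = [polyval(p1,x); polyval(p2,x); polyval(p3,x); polyval(p4,x)];
H    = [ones(size(xall)), xall, xall.^2, xall.^3];
c    = H \ yall; % one least-squares cubic for all of the data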

How to accurately calibrate a measurement using a higher order correlation?

I have about 1000 measurements taken with a device; let's call these measurements y. For each of these measurements, I know what the actual value should be; let's call these z. How can I calibrate, adjust, or scale y for a better estimate? I was thinking of solving one of two systems of equations (one linear, one nonlinear, given as images in the original post) for alpha, beta, and gamma.
Could someone give me some advice and let me know if I am doing this correctly?
First you need to know that a measurement device makes two kinds of errors: accidental and systematic.
Accidental errors are due to a number of perturbing factors with complex interactions, and they result in non-repeatability (measuring the same value twice gives different results). To reduce accidental errors, you can repeat the measurement and average.
Systematic errors are permanent and stable. They are due to the relation z = y being wrong or approximate, and they repeat identically for the same measurement. The true relation can be of the form y = z + c with c != 0 (offset error), y = c*z with c != 1 (gain error), y = c1*z + c2 (both), or nonlinear, like y = c1*z^2 + c2*z + c3, y = (c1*z + c2) / (c3*z + c4), y = ln(exp(z)+1), or any other.
In some cases you have reason to know the functional form of the relation (for instance, a metallic ruler gets a wrong "gain" when the temperature changes); in other cases you don't, and you can use an empirical model such as a polynomial (quite often the relation is smooth and remains close to y = z).
Usually, a plot of the (z, y) points will hint at the importance of the accidental errors and at the possible shape of the functional relation.
A simple approach is to try a least-squares fit of a polynomial model (say second or third degree). When you have found the coefficients, look at the relative magnitudes of the polynomial terms (powers) over the working range. This will tell you whether all terms are relevant. I advise you to discard the terms that do not significantly decrease the fitting error and keep a simple model.
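A sketch of that fit (assuming y and z are the 1000-by-1 vectors from the question, and calibrating z as a polynomial in y):
p    = polyfit(y, z, 2); % quadratic calibration curve: z is approximately polyval(p, y)
zhat = polyval(p, y); % calibrated estimates
rmse = sqrt(mean((z - zhat).^2)); % compare this across degrees 1, 2, 3, ...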
Consider the case of the plot below, chosen randomly from the web.
At first sight the relation looks linear, with no offset error (the relation includes the point (0, 0)) and a few irregularities that we can attribute to accidental errors. For this device, the straight-line model y = c*z should be appropriate, and adding nonlinear terms would be useless or misleading.

What is the fastest method for solving exp(ax)-ax+c=0 for x in matlab

What is the least computationally expensive way to solve the following equation in Matlab:
exp(ax)-ax+c=0
where a and c are constants and x is the value I'm trying to find?
Currently I am using the built-in solver function, and I know the solution is single-valued, but it is just taking longer than I would like.
Just wanting something to run more quickly is insufficient for that to happen.
And, sorry, but if fzero is not fast enough, then you won't do much better with a general root-finding tool.
If you aren't using fzero, then why not? After all, that IS the built-in solver you did not name. (BE EXPLICIT! Otherwise we must guess.) Perhaps you are using solve, from the Symbolic Math Toolbox. It will be slower, since it is a symbolic tool.
Having said that, I might point out that you can simplify things by recognizing that this is really a problem with a single parameter, c. That is, transform the problem into solving
exp(y) - y + c = 0
where
y = ax
Once you know the value of y, divide by a to get x.
Of course, this way of looking at the problem makes it obvious that you have made an incorrect statement: the solution is not single-valued. There are TWO solutions for any value of c less than -1. When c = -1, the solution is unique, and for c greater than -1, no solutions exist in the real numbers. (If you allow complex results, then there will be solutions there too.)
So if you MUST solve the above problem frequently and fzero was inadequate, then I would consider a spline model, where I had precomputed solutions to the problem for a sufficient number of distinct values of c. Interpolate that spline model to get a predicted value of y for any c.
If I needed more accuracy, I might take a single Newton step from that point.
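A sketch of that precompute-and-interpolate idea for the negative real branch (the grid limits and resolution here are arbitrary choices of mine):
cgrid = linspace(-5, -1.001, 200); % c must be below -1 for real roots
ygrid = arrayfun(@(c) fzero(@(y) exp(y) - y + c, [-20, 0]), cgrid); % negative root
cq = -2; % example query value
yq = interp1(cgrid, ygrid, cq, 'spline');
yq = yq - (exp(yq) - yq + cq)/(exp(yq) - 1); % one Newton step: f'(y) = exp(y) - 1
xq = yq / a; % recover x from y = a*x (a is the constant from the problem)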
In the event that you can use the Lambert W function, then solve actually does give us a solution for the general problem. (As you see, I am just guessing what you are trying to solve this with and what your goals are. Explicit questions help the person trying to help you.)
solve('exp(y) - y + c')
ans =
c - lambertw(0, -exp(c))
The zero first argument to lambertw yields the negative solution. In fact, we can use lambertw to give us both the positive and negative real solutions for any c no larger than -1.
X = @(c) c - lambertw([0 -1], -exp(c));
X(-1.1)
ans =
-0.48318 0.41622
X(-2)
ans =
-1.8414 1.1462
Solving your system symbolically
syms a c x;
fx0 = solve(exp(a*x)-a*x+c==0,x)
which results in
fx0 =
(c - lambertw(0, -exp(c)))/a
As @woodchips pointed out, the Lambert W function has two real branches, W0 and W−1. The solution given is with respect to the upper (or principal) branch, denoted W0; your equation actually has an infinite number of complex solutions for Wk (the W0 and W−1 solutions are real if c ≤ −1). In Matlab, lambertw is only implemented for symbolic inputs and is thus a very slow way of solving your equation if you're interested in numerical (double precision) solutions.
If you wish to solve such equations numerically in an efficient manner, you might look at Corless, et al. 1996. But as long as your parameter c is no larger than −1, i.e., -exp(c) is in [−1/e, 0), and you're interested in the W0 branch, you can use the Matlab code that I wrote to answer a similar question at Math.StackExchange. This code should be much more efficient than a naïve approach with fzero.
If your values of c are larger than −1, or you want the solution corresponding to a different branch, then your solution may be complex-valued and you won't be able to use the simple code I linked to above. In that case, you can more fully implement the function by reading the Corless, et al. 1996 paper, or you can try converting the Lambert W to a Wright ω function: W0(z) = ω(log(z)), W−1(z) = ω(log(z)−2πi). In your case, using Matlab's wrightOmega, the W0 branch corresponds to:
fx0 =
(c - wrightOmega(log(-exp(c))))/a
and the W−1 branch to:
fxm1 =
(c - wrightOmega(log(-exp(c))-2*sym(pi)*1i))/a
If c is real, then the above reduces to
fx0 =
(c - wrightOmega(c+sym(pi)*1i))/a
and
fxm1 =
(c - wrightOmega(c-sym(pi)*1i))/a
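For example, evaluating the W0-branch expression numerically (a sketch, assuming the Symbolic Math Toolbox; the values a = 2 and c = -2 are my own):
a = 2; c = -2;
x0 = double((c - wrightOmega(sym(c) + sym(pi)*1i))/a) % y = a*x0 is the negative root, about -1.8414
% check: exp(a*x0) - a*x0 + c should be approximately 0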
Matlab's wrightOmega function is also symbolic only, but I have written a double precision implementation (based on Lawrence, et al. 2012) that you can find on my GitHub here and that is 3+ orders of magnitude faster than evaluating the function symbolically. As your problem is technically in terms of a Lambert W, it may be more efficient, and possibly more numerically accurate, to implement that more complicated function for the regime of interest (this is due to the log transformation and the extra evaluation of a complex log). But feel free to test.