I'm not familiar with advanced math, so I don't know where to start.
I found an article like this and have been following its description, but it is not easy for me.
I'm not sure how to make a single polynomial equation (or something like that) from the above 4 polynomial equations. Is this a possible approach?
If yes, would you please help me work out how to get such a polynomial (or equation)? If not, would you let me know why?
UPDATE
I tried the following:
clear all;
clc
ab = (H' * H) \ (H' * y);   % least-squares coefficients via the normal equations
y2 = H * ab;                % fitted values at the sample points
Finally I get some numbers like this.
So, what does this mean?
As you can see from the red curve, something is wrong.
What did I miss?
All the article says is "you can combine multiple data sets into one to get a single polynomial".
You can also go in the other direction: subdivide your data set into pieces and fit as many separate polynomials as you wish. (This is akin to n-fold cross-validation.)
You start with a collection of n points (x, y). (Keep it simple by having only one independent variable x and one dependent variable y.)
Your first step should be to plot the data, look at it, and think about what kind of relationship between the two would explain it well.
Your next step is to assume some form for the relationship between the two. People like polynomials because they're easy to understand and work with, but other, more complex relationships are possible.
One polynomial might be:
y = c0 + c1*x + c2*x^2 + c3*x^3
This is your general relationship between the dependent variable y and the independent variable x.
You have n points (x, y). Your function can't go through every point. In the example I gave there are only four coefficients. How do you calculate the coefficients for n >> 4?
That's where the matrices come in. You have n equations:
y(1) = c0 + c1*x(1) + c2*x(1)^2 + c3*x(1)^3
....
y(n) = c0 + c1*x(n) + c2*x(n)^2 + c3*x(n)^3
You can write these as a matrix equation:
y = H * c
where y is the n-by-1 vector of observed values, c is the 4-by-1 vector of unknown coefficients, and H is the n-by-4 matrix whose i-th row is [1, x(i), x(i)^2, x(i)^3].
Premultiply both sides by transpose(H) (written H' in MATLAB; the prime denotes "transpose"):
transpose(H) * y = transpose(H) * H * c
Do a standard matrix inversion or LU decomposition to solve for the unknown vector of coefficients c. These particular coefficients minimize the sum of squares of differences between the function evaluated at each point x and your actual value y.
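In MATLAB/Octave, a minimal sketch of this (assuming your data are in column vectors x and y) could be:
H = [ones(size(x)), x, x.^2, x.^3];   % n-by-4 design matrix for a cubic
c = (H' * H) \ (H' * y);              % solve the normal equations for [c0; c1; c2; c3]
% equivalently: c = H \ y;            % backslash performs the least-squares solve directly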
Update:
I don't know where this fixation with those polynomials comes from.
Your y vector? Wrong. Your H matrix? Wrong again.
If you must insist on using those polynomials, here's what I'd recommend: You have a range of x values in your plot. Let's say you have 100 x values, equally spaced between 0 and your max value. Those are the values to plug into your H matrix.
Use the polynomials to synthesize sets of y values, one for each polynomial.
Combine all of them into a single large problem and solve for a new set of coefficients. If you want a 3rd order polynomial, you'll only have four coefficients and a single fitted polynomial. It'll represent the least squares best approximation of all the synthesized data you created with your four polynomials.
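A minimal sketch of that combined fit (names here are assumptions: C is a 4-by-4 matrix whose columns hold the four polynomials' coefficients as [c0; c1; c2; c3], and xmax is your maximum x value):
x = linspace(0, xmax, 100)';                       % 100 equally spaced x values
H = [ones(100,1), x, x.^2, x.^3];                  % cubic design matrix
Ybig = [H*C(:,1); H*C(:,2); H*C(:,3); H*C(:,4)];   % synthesized y values, one block per polynomial
Hbig = repmat(H, 4, 1);                            % the same design matrix stacked four times
c = Hbig \ Ybig;                                   % one least-squares fit over all the synthesized data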
Related
I have two vectors of data points (gene expression in tissues A and B) and I want to see if there is any systematic bias along their magnitude (same expression of gene X in A and B).
The idea was to build a simple regression model in stan and see how much the posterior for the slope (beta) overlaps with 1.
model {
  for (n in 1:N) {
    // i[n] is the group index of observation n
    y[n] ~ normal(alpha[i[n]] + beta[i[n]] * x[n], sigma[i[n]]);
  }
}
However, depending on which vector is x and which is y, I get different results, where one slope is about 1 and the other is not (see the image, where x and y are swapped and the colored lines represent the regressions I get from the model; gray is slope 1). As I found out, this is typical for regression methods like ordinary least squares, which makes sense when one variable depends on the other. However, here there is no such dependency and both vectors are "equal".
Now the question is: what would be an appropriate model to perform a symmetric regression in Stan?
Following the suggestion from LukasNeugebauer to standardize the data first and work without an intercept does not solve the problem.
I cheated a bit and found a solution:
When you rotate the coordinate system by 45 degrees, the new y-axis (y') carries the information of x and y in equal amounts. Therefore, assuming variance only along the new y-axis involves both x and y.
x' = x*cos((pi/180)*45) + y*sin((pi/180)*45)
y' = -x*sin((pi/180)*45) + y*cos((pi/180)*45)
The above model now gives symmetric results, where a slope of 0 in the rotated system corresponds to a slope of 1 in the old one.
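For illustration, the rotation itself (sketched here in MATLAB; x and y are the two data vectors) is just:
theta = pi/4;                          % 45 degrees in radians
xr =  x*cos(theta) + y*sin(theta);     % rotated x-axis (x')
yr = -x*sin(theta) + y*cos(theta);     % rotated y-axis (y')
% fit yr against xr; a slope of 0 here corresponds to a slope of 1 in the original axes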
I want to fit a curve to my data points (x, y) that has the following form:
1/y = (x^-1)*a + b
At first I want to do this using Octave, but later I have to code it into a microcontroller using C.
A quick search on Google and in the MATLAB documentation doesn't give an answer; I can't find a function that does polyfit with terms of negative order.
Is there a special set of functions for such an operation, or do I have to somehow transform my formula into a standard math problem?
Your unknowns are a and b, and both enter the problem linearly. So you can use first-order polynomial fitting; it is already in the form of a standard math problem. To see this, just rename
Y = a*X + b
with the known data vectors (or points)
Y = 1/y
X = 1/x
That's all.
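In Octave/MATLAB, a minimal sketch of this (assuming your data are in vectors x and y):
p = polyfit(1 ./ x, 1 ./ y, 1);   % fit 1/y = a*(1/x) + b as a straight line
a = p(1);                         % slope
b = p(2);                         % intercept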
I have a set of points describing a closed curve in the complex plane, call it Z = [z_1, ..., z_N]. I'd like to interpolate this curve, and since it's periodic, trigonometric interpolation seemed a natural choice (especially because of its increased accuracy). By performing the FFT, we obtain the Fourier coefficients:
F = fft(Z);
At this point, we could get Z back by the formula (where 1i is the imaginary unit, and we use (k-1)*(n-1) because MATLAB indexing starts at 1)
Z(n) = (1/N) * sum_{k=1}^{N} F(k) * exp(1i*2*pi*(k-1)*(n-1)/N),   1 <= n <= N.
My question
Is there any reason why n must be an integer? Presumably, if we treat n as any real number between 1 and N, we will just get more points on the interpolated curve. Is this true? For example, if we wanted to double the number of points, could we not set
Z_new(n) = (1/N) * sum_{k=1}^{N} F(k) * exp(1i*2*pi*(k-1)*(n-1)/N),   with n = 1, 1.5, 2, 2.5, ..., N-1, N-0.5, N?
The new points are of course just subject to some interpolation error, but they'll be fairly accurate, right? The reason I'm asking this question is because this method is not working for me. When I try to do this, I get a garbled mess of points that makes no sense.
(By the way, I know that I could use the interpft() command, but I'd like to add points only in certain areas of the curve, for example between z_a and z_b)
The point is that when n is an integer, the exponentials are orthogonal functions that can serve as a basis for the space. When n is not an integer, the exponential functions in the formula are no longer orthogonal, and the expansion of a function in this non-orthogonal basis is not as meaningful as you expected.
For the orthogonality, consider the standard example (from here): for distinct integers n_1 and n_2, the integral of exp(1i*n_1*t) * exp(-1i*n_2*t) over one period is zero. As you can check, if you pick two values n_1 and n_2 that are not integers, these integrals are no longer zero, and the functions are not orthogonal.
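A quick numeric check of this claim (a minimal sketch; N and the test frequencies are arbitrary choices):
N = 16;
n = 0:N-1;
f  = @(k) exp(1i*2*pi*k*n/N);             % discrete complex exponential at frequency k
ip = @(k1,k2) sum(f(k1) .* conj(f(k2)));  % inner product over the sample grid
abs(ip(3, 5))      % integer frequencies: essentially 0 (orthogonal)
abs(ip(3.5, 5.2))  % non-integer frequencies: clearly nonzero (not orthogonal)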
I have the following equation:
y = exp(A*u^2 + B*a^3)
I want to do an exponential curve fit using MATLAB for the above equation, where y = f(u,a). y is my output while (u,a) are my inputs. I want to find the coefficients A and B for a set of provided data.
I know how to do this for simple polynomials by defining states. As an example, if states = [ones(size(u)), u, u.^2], this will give me L + M*u + N*u^2, with L, M and N being regression coefficients.
However, this is not the case for the above equation. How could I do this in MATLAB?
Building on what @eigenchris said, simply take the natural logarithm (log in MATLAB) of both sides of the equation. If we do this, we are in fact linearizing the equation in log space. In other words, given your original equation:
y = exp(A*u^2 + B*a^3)
We get:
log(y) = A*u^2 + B*a^3
However, this isn't exactly polynomial regression. This is more of a least squares fitting of your points. Specifically, given a set of y values and a set of (u,a) pairs, y = (y_0, y_1, ..., y_N) and (u,a) = ((u_0, a_0), (u_1, a_1), ..., (u_N, a_N)), where N is the number of points that you have, you would build your system of equations like so:
log(y_0) = A*u_0^2 + B*a_0^3
log(y_1) = A*u_1^2 + B*a_1^3
...
log(y_N) = A*u_N^2 + B*a_N^3
This can be written in matrix form:
[log(y_0)]   [u_0^2  a_0^3]
[log(y_1)] = [u_1^2  a_1^3] * [A]
[   ...  ]   [ ...    ... ]   [B]
[log(y_N)]   [u_N^2  a_N^3]
To solve for A and B, you simply need to find the least-squares solution. You can see that the system is in the form:
Y = M * X
where M is the matrix of regressors above and X = [A; B] is the unknown vector (M is used here instead of A to avoid a clash with the coefficient A).
To solve for X, we use what is called the pseudoinverse. As such:
X = M^{+} * Y
M^{+} is the pseudoinverse. This can elegantly be done in MATLAB using the \ or mldivide operator. All you have to do is build a vector of y values with the log taken, as well as the matrix of u and a values. Therefore, if your (u,a) points are stored in the vectors u and a, and your y values in y, you would simply do this:
x = [u.^2 a.^3] \ log(y);
x(1) will contain the coefficient A, while x(2) will contain the coefficient B. As A. Donda has noted in his answer (which I embarrassingly forgot about), the values of A and B are obtained assuming that the errors with respect to the exact curve you are trying to fit are normally (Gaussian) distributed with a constant variance. The errors also need to be additive. If this is not the case, the parameters you obtain may not represent the best possible fit.
See this Wikipedia page for more details on what assumptions least-squares fitting takes:
http://en.wikipedia.org/wiki/Least_squares#Least_squares.2C_regression_analysis_and_statistics
One approach is to use a linear regression of log(y) with respect to u² and a³:
Assuming that u, a, and y are column vectors of the same length:
AB = [u .^ 2, a .^ 3] \ log(y)
After this, AB(1) is the fit value for A and AB(2) is the fit value for B. The computation uses Matlab's mldivide operator; an alternative would be to use the pseudo-inverse.
The fit values found this way are Maximum Likelihood estimates of the parameters under the assumption that deviations from the exact equation are constant-variance normally distributed errors additive to A u² + B a³. If the actual source of deviations differs from this, these estimates may not be optimal.
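As a quick sanity check of this approach (a minimal sketch; the true values A_true and B_true are made up for illustration):
u = rand(100, 1);  a = rand(100, 1);
A_true = 2;  B_true = -1;
y = exp(A_true*u.^2 + B_true*a.^3 + 0.01*randn(100, 1));   % additive noise in log space
AB = [u .^ 2, a .^ 3] \ log(y)                             % AB(1) ~ 2, AB(2) ~ -1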
I'm going to write a program where the input is a data set of 2D points and the output is the regression coefficients of the line of best fit, found by minimizing the mean squared error (MSE).
I have some sample points that I would like to process:
X Y
1.00 1.00
2.00 2.00
3.00 1.30
4.00 3.75
5.00 2.25
How would I do this in MATLAB?
Specifically, I need to get the following formula:
y = A + Bx + e
A is the intercept and B is the slope, while e is the residual error per point.
Judging from the link you provided, and my understanding of your problem, you want to calculate the line of best fit for a set of data points. You also want to do this from first principles. This will require some basic Calculus as well as some linear algebra for solving a 2 x 2 system of equations. If you recall from linear regression theory, we wish to find the best slope m and intercept b such that for a set of points ([x_1,y_1], [x_2,y_2], ..., [x_n,y_n]) (that is, we have n data points), we want to minimize the sum of squared residuals between this line and the data points.
In other words, we wish to minimize the cost function F(m,b,x,y):
F(m,b,x,y) = sum_{i=1}^{n} (y_i - (m*x_i + b))^2
m and b are our slope and intercept for this best fit line, while x and y are vectors of the x and y co-ordinates that form our data set.
This function is convex, so there is an optimal minimum that we can determine. The minimum can be determined by finding the derivative with respect to each parameter, and setting these equal to 0. We then solve for m and b. The intuition behind this is that we are simultaneously finding m and b such that the cost function is jointly minimized by these two parameters. In other words:
dF/dm = 0   and   dF/db = 0
OK, so let's find the first quantity, dF/dm:
dF/dm = sum_{i=1}^{n} 2*(y_i - m*x_i - b)*(-x_i) = 0
We can drop the factor 2 from the derivative as the other side of the equation is equal to 0, and we can also do some distribution of terms by multiplying the -x_i term throughout:
sum_{i=1}^{n} (m*x_i^2 + b*x_i - x_i*y_i) = 0
Next, let's tackle the next parameter, dF/db:
dF/db = sum_{i=1}^{n} 2*(y_i - m*x_i - b)*(-1) = 0
We can again drop the factor of 2 and distribute the -1 throughout the expression:
sum_{i=1}^{n} (m*x_i + b - y_i) = 0
Knowing that sum_{i=1}^{n} 1 is simply n, we can simplify the above to:
m*sum_{i=1}^{n} x_i + n*b - sum_{i=1}^{n} y_i = 0
Now, we need to simultaneously solve for m and b with the above two equations. This will jointly minimize the cost function which finds the best line of fit for our data points.
Doing some re-arranging, we can isolate m and b on one side of the equations and the rest on the other:
m*sum(x_i^2) + b*sum(x_i) = sum(x_i*y_i)
m*sum(x_i)   + b*n        = sum(y_i)
As you can see, we can formulate this as a 2 x 2 system of equations to solve for m and b. Specifically, let's re-arrange the two equations above into matrix form:
[sum(x_i^2)  sum(x_i)]   [m]   [sum(x_i*y_i)]
[sum(x_i)    n       ] * [b] = [sum(y_i)    ]
With regards to the above, we can decompose the problem by solving a linear system A*x = b. All you have to do is solve for x, which is x = A^{-1}*b. To find the inverse of a 2 x 2 system, given the matrix:
A = [a  b]
    [c  d]
The inverse is simply:
A^{-1} = (1/(a*d - b*c)) * [ d  -b]
                           [-c   a]
Therefore, by substituting our quantities into the above equation, we solve for m and b in matrix form, and it simplifies to this:
[m]   [ n         -sum(x_i)  ]   [sum(x_i*y_i)]
[b] = [-sum(x_i)   sum(x_i^2)] * [sum(y_i)    ] / (n*sum(x_i^2) - (sum(x_i))^2)
Carrying out this multiplication and solving for m and b individually, this gives:
m = (sum(x_i)*sum(y_i) - n*sum(x_i*y_i)) / ((sum(x_i))^2 - n*sum(x_i^2))
b = (sum(x_i*y_i)*sum(x_i) - sum(y_i)*sum(x_i^2)) / ((sum(x_i))^2 - n*sum(x_i^2))
As such, to find the best slope and intercept to best fit your data, you need to calculate m and b using the above equations.
Given your data specified in the link in your comments, we can do this quite easily:
%// Define points
X = 1:5;
Y = [1 2 1.3 3.75 2.25];
%// Get total number of points
n = numel(X);
%// Define relevant sums
sumxi = sum(X);
sumyi = sum(Y);
sumxiyi = sum(X.*Y);
sumxi2 = sum(X.^2);
sumyi2 = sum(Y.^2); %// not actually needed for the line fit
%// Determine slope and intercept
m = (sumxi * sumyi - n*sumxiyi) / (sumxi^2 - n*sumxi2);
b = (sumxiyi * sumxi - sumyi * sumxi2) / (sumxi^2 - n*sumxi2);
%// Display them
disp([m b])
... and we get:
0.4250 0.7850
Therefore, the line of best fit that minimizes the error is:
y = 0.4250*x + 0.7850
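If you also want the residual error per point (the e in y = A + B*x + e from the question), that is simply:
e = Y - (m*X + b);   %// residual of the fitted line at each data point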
However, if you want to use built-in MATLAB tools, you can use polyfit (credit goes to Luis Mendo for providing the hint). polyfit determines the line (or nth order polynomial curve rather...) of best fit by linear regression by minimizing the sum of squared errors between the best fit line and your data points. How you call the function is so:
coeff = polyfit(x,y,order);
x and y are the x and y points of your data while order determines the order of the line of best fit you want. As an example, order=1 means that the line is linear, order=2 means that the line is quadratic and so on. Essentially, polyfit fits a polynomial of order order given your data points. Given your problem, order=1. As such, given the data in the link, you would simply do:
X = 1:5;
Y = [1 2 1.3 3.75 2.25];
coeff = polyfit(X,Y,1)
coeff =
0.4250 0.7850
The way coeff works is that these are the coefficients of the regression line, starting from the highest order in decreasing value. As such, the above coeff variable means that the regression line was fitted as:
y = 0.4250*x + 0.7850
The first coefficient is the slope while the second coefficient is the intercept. You'll also see that this matches up with the link you provided.
If you want a visual representation, here's a plot of the data points as well as the regression line that best fits these points:
plot(X, Y, 'r.', X, polyval(coeff, X));
(The resulting figure shows the red data points with the best-fit line drawn through them.)
polyval takes an array of coefficients (usually produced by polyfit), and you provide a set of x co-ordinates and it calculates what the y values are given the values of x. Essentially, you are evaluating what the points are along the best fit line.
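For a smoother rendering of the fitted line, you can evaluate the polynomial on a denser grid (a small usage sketch):
xf = linspace(min(X), max(X), 100);   %// 100 evenly spaced evaluation points
yf = polyval(coeff, xf);              %// best fit line evaluated at those points
plot(X, Y, 'r.', xf, yf);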
Edit - Extending to higher orders
If you want to extend this to find the best fit for any mth order polynomial, I won't go into the details, but it boils down to constructing the following linear system. Given the relationship for the ith point (x_i, y_i):
y_i = c_0 + c_1*x_i + c_2*x_i^2 + ... + c_m*x_i^m + e_i
You would construct the following linear system:
[y_1]   [1  x_1  x_1^2  ...  x_1^m]   [c_0]   [e_1]
[y_2]   [1  x_2  x_2^2  ...  x_2^m]   [c_1]   [e_2]
[...] = [           ...          ] * [...] + [...]
[y_n]   [1  x_n  x_n^2  ...  x_n^m]   [c_m]   [e_n]
Basically, you create a vector of points y, and you construct a matrix X such that each column takes your vector of points x raised to a power. Specifically, the first column is the zero-th power, the second column is the first power, the third column is the second power, and so on, up to m, the order of polynomial you want. The vector e holds the residual error for each point in your set.
Specifically, the formulation of the problem can be written compactly in matrix form as:
y = X*c + e
Once you construct this matrix, you find the parameters by least-squares by calculating the pseudo-inverse. How the pseudo-inverse is derived, you can read up on in the Wikipedia article I linked to, but this is the basis for minimizing a system by least-squares. Specifically:
c = (X^{T}*X)^{-1}*X^{T}*y
(X^{T}*X)^{-1}*X^{T} is the pseudo-inverse. X itself is a very popular matrix, which is known as the Vandermonde matrix and MATLAB has a command called vander to help you compute that matrix. A small note is that vander in MATLAB is returned in reverse order. The powers decrease from m-1 down to 0. If you want to have this reversed, you'd need to call fliplr on that output matrix. Also, you will need to append one more column at the end of it, which is the vector with all of its elements raised to the mth power.
I won't go into how you'd repeat your example for anything higher order than linear. I'm going to leave that to you as a learning exercise, but simply construct the vector y and the matrix X with vander, then find the parameters by applying the pseudo-inverse of X as above.
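For concreteness, here's a minimal sketch of a quadratic (m = 2) fit to the same five points; selecting columns of the flipped Vandermonde matrix is one assumed way to build the rectangular design matrix:
X = 1:5;
Y = [1 2 1.3 3.75 2.25];
V = fliplr(vander(X));                 %// columns now hold powers 0, 1, ..., numel(X)-1
Xmat = V(:, 1:3);                      %// keep powers 0..2 for a 2nd-order polynomial
c = (Xmat.'*Xmat) \ (Xmat.'*Y(:));     %// pseudo-inverse solve: c = [c_0; c_1; c_2]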
Good luck!