I need to solve the system Ax = b using the SOR method, with omega = 1.2, tolerance 10^-4, and initial guess (0, 0, 0):
A = [7,-1,-4;1,5,3;2,-1,4]
b = [4,4,3]'
A is diagonally dominant, so I expected the iteration to converge. However, the result I get is rather strange:
x = 4.633748588560370e+67
y = -5.235501743394227e+67
z = -5.247746350370127e+67
I used the algorithm provided here. What am I missing?
If A were symmetric positive definite, then any omega in (0, 2), including 1.2, would be guaranteed to converge. Your A is not symmetric, and for a strictly diagonally dominant matrix like this one, convergence is only guaranteed for 0 < omega <= 1. Try something like omega = 0.8.
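For reference, since the asker's loop wasn't posted, a minimal SOR iteration for this system might look like the following (my sketch, not the original code):

A = [7 -1 -4; 1 5 3; 2 -1 4];
b = [4; 4; 3];
omega = 0.8;    %relaxation factor; omega = 1.2 diverges for this A
tol = 1e-4;
x = zeros(3,1); %initial guess (0,0,0)
for iter = 1:1000
    x_old = x;
    for i = 1:3
        %use already-updated entries x(1:i-1) and old entries x_old(i+1:end)
        sigma = A(i,1:i-1)*x(1:i-1) + A(i,i+1:end)*x_old(i+1:end);
        x(i) = (1-omega)*x_old(i) + omega*(b(i) - sigma)/A(i,i);
    end
    if norm(x - x_old, inf) < tol
        break
    end
end
x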
I am trying to calculate the torque on an ellipsoid in a gravity field. In theory, the torque on the ellipsoid should go to zero as alpha goes to zero. However, MATLAB returned a nonzero value for the following integral:
e = 1/1737100/10^2;
D = 384399000;
G = 6.6743*10^(-11);
R = 1737100;
M = 5.97237*10^24;
alpha = 0;
dT = @(r,theta,phi) ((1+e).*sin(theta).*sin(phi).*sin(alpha)+cos(theta).*cos(alpha)).*r.^3.*sin(theta)./(r.^2+D.^2-2.*r.*D.*((1+e).*sin(theta).*sin(phi).*cos(alpha)-cos(theta)*sin(alpha))).^(3./2);
Torque = G.*M.*D.*(1+e).*integral3(dT,0,R,0,pi,0,2*pi)
I guess this is because integral3 just feeds a matrix of sample points into the expression to get the numerical solution. So I am wondering: is there any numerical method that can give a more accurate solution than integral3? Ideally the computational time shouldn't be too high, and of course, if an analytical solution exists, I'd be happy to hear about it.
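One thing worth checking here (my suggestion; no answer was posted above): integral3 is adaptive with default tolerances (AbsTol = 1e-10, RelTol = 1e-6), so an integral that should be exactly zero will generally come back as a small nonzero residual, and the large prefactor G*M*D*(1+e) (on the order of 1e23) then amplifies that residual enormously. Tightening the tolerances is a quick test of whether the nonzero value is just quadrature noise:

%same dT as above; tighten tolerances and inspect the raw integral
%before multiplying by the large prefactor
I = integral3(dT,0,R,0,pi,0,2*pi,'AbsTol',1e-14,'RelTol',1e-10);
Torque = G.*M.*D.*(1+e).*I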
Data x is input to an autoregressive (AR) model. The output of the AR model is corrupted with additive white Gaussian noise at SNR = 30 dB. The observations are denoted by noisy_y.
Suppose we have close estimates hat_h of the AR model coefficients (obtained from Least Squares estimation). I want to see how close the input obtained by deconvolving the measurements with hat_h is to the known x.
My confusion is which variable to use for the deconvolution: the clean y or noisy_y? The deconvolution should give x_hat, but I am not sure whether the correct approach is to deconvolve noisy_y or the y from before the noise was added. I have used the code below.
Can somebody please explain the correct method to compute and plot x and x_hat?
Below is the plot of x vs x_hat. As can be seen, they do not match. Where is my understanding wrong? Please help.
The code is:
clear all
N = 200; %number of data points
a1 = 0.1650;
b1 = -0.850;
h = [1 a1 b1]; %true coefficients
x = rand(1,N); %known input signal
%%AR model
y = filter(1,h,x); %transmitted signal through AR channel
noisy_y = awgn(y,30,'measured'); %add AWGN at SNR = 30 dB
hat_h = [1 0.133 0.653]; %estimated coefficients (from LS estimation)
x_hat = filter(hat_h,1,noisy_y); %deconvolution
plot(1:50,x(1:50),'b'); %known input
hold on;
plot(1:50,x_hat(1:50),'-.rd'); %recovered input
A first issue is that the coefficients h of your AR model correspond to an unstable system since one of its poles is located outside the unit circle:
>> abs(roots(h))
ans =
1.00814
0.84314
Parameter estimation techniques are quite likely to fail to converge when given a diverging input sequence. Indeed, looking at the stated hat_h = [1 0.133 0.653], it is pretty clear that the parameter estimation did not converge anywhere near the actual coefficients. Since you did not provide the code showing how you obtained hat_h (other than specifying that it was "obtained from Least Squares estimation"), it isn't possible to comment further on what went wrong with your estimation.
That said, the standard formulation of Least Mean Squares (LMS) filters is given for an MA model. A common method for AR parameter estimation is to solve the Yule-Walker equations:
hat_h = aryule(noisy_y - mean(noisy_y), length(h)-1);
If we were to use this estimation method with the stable system defined by:
h = [1 -a1 -b1]; %flipped signs put both poles inside the unit circle
x = rand(1,N);
%%AR model
y = filter(1,h,x); %transmitted signal through AR channel
noisy_y = awgn(y,30,'measured');
hat_h = aryule(noisy_y - mean(noisy_y), length(h)-1);
x_hat = filter(hat_h,1,noisy_y); %deconvolution
The plot of x and x_hat would look like:
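A quick quantitative check to complement the plot (my addition, not part of the original answer) is the relative reconstruction error:

rel_err = norm(x - x_hat)/norm(x) %small values indicate a close match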
I'm trying to solve the following equation for the variable h:

∫₀^∞ [ ∫₀^∞ F( h / √((n₀−1)(1/x + 1/y)) ) f(x) dx ]^(k−1) f(y) dy = P*

where F is the normal cumulative distribution function and f is the probability density function of the chi-square distribution.
I want to solve it numerically using Matlab.
First I tried to solve the problem with simpler versions of the functions F and f.
Then, I tried to solve the problem as below:
n = 3; % n0 in the equation
k = 2;
P = 0.99; % P* in the equation
F = @(x,y,h) normcdf(h./sqrt((n-1)*(1./x+1./y)));
g1 = @(y,h) integral(@(x) F(x,y,h).*chi2pdf(x,n),0,Inf, 'ArrayValued', true);
g2 = @(h) integral(@(y) (g1(y,h).^(k-1)).*chi2pdf(y,n),0,Inf)-P;
bsolve = fzero(g2,0)
I obtained an answer of 5.1496. The problem is that this answer seems wrong.
There is a paper which provides a table of h values obtained by solving the above equation. The paper was published in 1984, when computing power was not sufficient to solve the equation quickly. They solved it for various values:
n0 = 3:20, 30:10:50
k = 2:10
P* = 0.9, 0.95 and 0.99
They provide the value of h in each case.
Now I'm trying to solve this equation and get the value of h with different values of n0, k and P* (for example n0=25, k=12 and P*=0.975), where the paper does not provide the value of h.
The Matlab code above gives me different values of h than the paper.
For example:
n0=3, k=2 and P*=0.99: my code gives h = 5.1496, the paper gives h = 10.276.
n0=10, k=4 and P*=0.9: my code gives h = 2.7262, the paper gives h = 2.912.
The author says
The integrals were evaluated using 32 point Gauss-Laguerre numerical quadrature. This was accomplished by evaluating the inner integral at the appropriate 32 values of y which were available from the IBM subroutine QA32. The inner integral was also evaluated using 32 point Gauss-Laguerre numerical quadrature. Each of the 32 values of the inner integral was raised to the k-1 power and multiplied by the appropriate constant.
Matlab seems to use the same method of quadrature:
q = integral(fun,xmin,xmax) numerically integrates function fun from xmin to xmax using global adaptive quadrature and default error tolerances.
Does anyone have any idea which results are correct? Either I have made some mistake(s) in the code, or the paper could be wrong - possibly because the author used only 32 quadrature points in estimating the integrals?
This approach does work, but the chi-squared distribution should have (n-1) degrees of freedom, not n as in the code posted. If that's fixed, then the Rinott's constant values agree with the literature - or at least, the literature that isn't behind a paywall; I can't speak to the 1984 Wilcox paper.
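Concretely, that fix amounts to changing the degrees of freedom in both chi2pdf calls of the asker's code (a sketch of the correction, not independently verified against the full table):

n = 3; % n0 in the equation
k = 2;
P = 0.99; % P* in the equation
F = @(x,y,h) normcdf(h./sqrt((n-1)*(1./x+1./y)));
g1 = @(y,h) integral(@(x) F(x,y,h).*chi2pdf(x,n-1),0,Inf, 'ArrayValued', true);
g2 = @(h) integral(@(y) (g1(y,h).^(k-1)).*chi2pdf(y,n-1),0,Inf)-P;
bsolve = fzero(g2,0)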
I'm trying to find the line which best fits my data. I use the code below, but now I want the data placed into an array, sorted so that the points closest to the line come first. How can I do this? Also, is polyfit the correct function to use for this?
x=[1,2,2.5,4,5];
y=[1,-1,-.9,-2,1.5];
n=1;
p = polyfit(x,y,n)
f = polyval(p,x);
plot(x,y,'o',x,f,'-')
PS: I'm using Octave 4.0 which is similar to Matlab
You can first compute the error between the real values y and the predicted values f:
err = abs(y-f);
Then sort the error vector:
[val, idx] = sort(err);
And use the sorted indices to sort your y values:
y2 = y(idx);
Now y2 has the same values as y, but with the ones closest to the fitted line first.
Do the same for x to compute x2, so that the correspondence between x2 and y2 is preserved:
x2 = x(idx);
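One caveat worth noting (my addition): err = abs(y-f) is the vertical distance to the line, not the perpendicular distance. For a line with slope p(1), the perpendicular distance is the vertical distance divided by sqrt(1 + p(1)^2); this constant factor leaves the sort order unchanged, but matters if you need the actual distances:

perp_dist = abs(y-f)./sqrt(1 + p(1)^2); %true point-to-line distance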
Sembei Norimaki did a good job of answering your primary question, so I will look at your secondary question: is polyfit the right function?
The least-squares best-fit line is the one that minimizes the sum of squared errors between the data and the model.
If it must be a "line", polyfit will do, since a line is just a first-degree polynomial, and first-degree polynomials have some properties that make them easy to work with. The first-order (linear) equation you are looking for comes in this form:
y = mx + b
where y is your dependent variable and x is your independent variable. So the challenge is this: find the m and b such that the modeled y is as close to the actual y as possible. As it turns out, the error associated with a linear fit is convex, meaning it has a single minimum. To calculate this minimum, it is simplest to append a column of ones (the bias term) to the x vector:
Xcombined = [x.' ones(length(x),1)];
then use the normal equation, derived from minimizing the squared error:
beta = inv(Xcombined.'*Xcombined)*(Xcombined.')*(y.')
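As a side note (not in the original answer), it is numerically safer in Matlab/Octave to solve the normal equations with the backslash operator than with an explicit inverse:

beta = Xcombined \ y.'; %same least-squares solution, better conditioned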
Great, now our line is defined as Y = Xcombined*beta. To draw the line, simply sample over some range of x and include the bias column:
Xplot = [[0:.1:5].' ones(length([0:.1:5].'),1)];
Yplot = Xplot*beta;
plot(Xplot(:,1), Yplot); %plot against the x column, not the ones column
So why does polyfit work so poorly? Well, I can't say for sure, but my hypothesis is that you need to transpose your x and y matrices. I would guess that would give you a much more reasonable line.
x = x.';
y = y.';
then try
p = polyfit(x,y,n)
I hope this helps. A wise man once told me (and as I learn every day), don't trust an algorithm you do not understand!
Here's some test code that may help someone else dealing with linear regression and least squares
%https://youtu.be/m8FDX1nALSE matlab code
%https://youtu.be/1C3olrs1CUw good video to work out by hand if you want to test
function [a0 a1] = rtlinreg(x,y)
x = x(:); %force column vectors
y = y(:);
n = length(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y))/(n*sum(x.^2) - (sum(x))^2); %a1 is the slope of the linear model
a0 = mean(y) - a1*mean(x); %a0 is the y-intercept
end
x=[65,65,62,67,69,65,61,67]'
y=[105,125,110,120,140,135,95,130]'
[a0 a1] = rtlinreg(x,y); %a1 is the slope of linear model, a0 is the y-intercept
x_model = min(x):.001:max(x);
y_model = a0 + a1.*x_model; %y=-186.47 +4.70x
plot(x,y,'x',x_model,y_model)
I am wondering how to fit three points, x = [0.42 0.64 0.96] and y = [4.2 5.1 6.0], with the model y = k*x^(0.88).
I tried [p,S,mu] = polyfit(x,y,0.88); but MATLAB says only integer powers are accepted. Thanks.
EDIT:
The idea is I know these three points should follow the curve based on some theory, so I want to plot it to convince myself. Also, I'd like to do curve fitting because I don't know what k is.
What about lsqnonlin?
You could try
model = @(x,k) (k*x.^0.88);
resVec = @(k) (y(:) - model(x(:),k));
k_start = 1;
k_opt = lsqnonlin(resVec,k_start);
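Since the model is linear in the single parameter k, the least-squares value is also available in closed form (this is what lsqnonlin should converge to here):

u = x(:).^0.88;
k_closed = (u.'*y(:))/(u.'*u); %least-squares k for y = k*u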
If you're OK with adding a constant to your model you can do:
p = polyfit(x.^(0.88),y,1);
then you'll have y approximated by p(1)*x.^(0.88)+p(2) (minimizing the sum of squared errors). Note that the three-output form [p,S,mu] = polyfit(...) returns coefficients for a centered and scaled variable, so this simple interpretation only holds with the single-output form.