OLS regression with Newey-West standard errors

I need to estimate a linear regression with the OLS method. Since I assume the error terms are serially correlated, I would like to account for heteroskedasticity and autocorrelation in the standard errors. I already wrote some code, but since I am not good at R at all, I wonder whether it is correct.
So far, my code is:
library(sandwich)  # NeweyWest()
library(dynlm)     # dynlm() for regressions on time series
library(lmtest)    # coeftest()
myregression <- dynlm(formula = x ~ y + z)  # x is the dependent variable here
coeftest(myregression, vcov = NeweyWest(myregression))
Any help is highly appreciated!
Best regards, Jack

Related

Trouble plotting a solution curve with dsolve

I'm trying to do my assignment, which is plotting a direction field and a solution curve that passes through a given point, and I'm having trouble with this particular differential equation: y' = 1 - x*y, y(0) = 0.
My code is :
syms x y;
y1=dsolve('Dy=1-x*y','y(0)=0','x');
y1=expand(y1);
ezplot(y1,[-10 10 -10 10]);
And I get an error about the input, I believe; it says:
Error using inlineeval: Error in expression, input must be real and full
and so on.
I have had success with other differential equations, but this one still fails.
I'm almost sure this should be a comment, but since I still don't have enough reputation to do so, I'm answering here.
Apparently the solution y1 for your equation is something like
y1 = -(2^(1/2)*pi^(1/2)*exp(-x^2/2)*erf((2^(1/2)*x*1i)/2)*1i)/2
The erf function needs a real input, but here it gets the complex argument (2^(1/2)*x*1i)/2, which is what causes the error. The expression is actually real-valued: erf of a purely imaginary argument is 1i times the real-valued imaginary error function erfi, so the solution can be rewritten as y1 = sqrt(pi/2)*exp(-x^2/2)*erfi(x/sqrt(2)).
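A quick numerical check (in plain Python rather than MATLAB, using a classical RK4 integrator; the step size and check point are arbitrary choices) confirms that the solution of y' = 1 - x*y, y(0) = 0 is real-valued and matches the series y = x - x^3/3 + ... near 0:

```python
def rk4(f, x0, y0, x_end, h=1e-3):
    """Classical 4th-order Runge-Kutta for y' = f(x, y)."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        step = min(h, x_end - x)
        k1 = f(x, y)
        k2 = f(x + step/2, y + step/2 * k1)
        k3 = f(x + step/2, y + step/2 * k2)
        k4 = f(x + step, y + step * k3)
        y += step/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += step
    return y

# y' = 1 - x*y with y(0) = 0; near 0 the Taylor series is y = x - x^3/3 + ...
f = lambda x, y: 1 - x*y
y_01 = rk4(f, 0.0, 0.0, 0.1)
print(y_01)  # a real number, close to 0.1 - 0.1**3/3
```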

Need help solving a pde with finite elements in matlab

I have a boundary value problem u'''' = f
with u(0) = u'(0) = u(1) = u'(1) = 0.
Multiplying by a test function v and integrating by parts twice gives the following weak formulation:
integral(0 to 1) u''*v'' dx = integral(0 to 1) f*v dx
So the basis functions are (Hermite) cubic polynomials. Since u'' and v'' are then linear, their product is quadratic, and the element integrals can be evaluated exactly with Simpson's rule.
So I want to write some code to assemble the stiffness matrix for this.
I find it very hard to start. I don't have much experience in MATLAB and would kindly ask you to help me get started.
Looking forward to any help!
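To get started, here is a rough sketch of the assembly (in Python rather than MATLAB, pure standard library; it assumes EI = 1, a uniform mesh on [0, 1], and two DOFs (u_i, u'_i) per node; the 4x4 matrix is the standard Hermite-cubic element stiffness for integral u''*v'' dx):

```python
def element_stiffness(h):
    """Standard 4x4 Hermite-cubic element matrix for integral u''*v'' dx."""
    return [[ 12/h**3,  6/h**2, -12/h**3,  6/h**2],
            [  6/h**2,  4/h,     -6/h**2,  2/h   ],
            [-12/h**3, -6/h**2,  12/h**3, -6/h**2],
            [  6/h**2,  2/h,     -6/h**2,  4/h   ]]

def assemble(n_elems):
    """Global stiffness matrix on [0, 1], 2 DOFs (u, u') per node."""
    h = 1.0 / n_elems
    ndof = 2 * (n_elems + 1)
    K = [[0.0] * ndof for _ in range(ndof)]
    ke = element_stiffness(h)
    for e in range(n_elems):
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]  # (u, u') at both element nodes
        for a in range(4):
            for b in range(4):
                K[dofs[a]][dofs[b]] += ke[a][b]
    return K

K = assemble(4)
# The clamped BCs u(0) = u'(0) = u(1) = u'(1) = 0 are then imposed by
# deleting the first two and last two rows/columns before solving.
```

A useful sanity check: the matrix is symmetric, and a rigid translation (u = 1, u' = 0 at both nodes) produces zero force.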

Fit a quadratic function of two variables (Practitioner Black Scholes Deterministic Volatility Functions)

I am attempting to fit the parameters of a deterministic volatility function for use in the practitioner Black Scholes model.
The formula for which I want to estimate the "a" parameters is:
sig = a0 + a1*K + a2*K^2 + a3*T + a4*T^2 + a5*K*T
Where sig, K and T are known; I have multiple observations of K, T and sig combinations but only want a single set of "a" parameters.
How might I go about this? My google searches and own attempts all failed, unfortunately.
Thank you!
The function lsqcurvefit allows you to define the function that you want to fit. It should be straightforward from there on.
http://se.mathworks.com/help/optim/ug/lsqcurvefit.html
Some Mathematics
Notation stuff: index your observations by i and add an error term.
sig_i = a0 + a1*K_i + a2*K_i^2 + a3*T_i + a4*T_i^2 + a5*K_i*T_i + e_i
A reasonable thing to do is to minimize the sum of squared error terms:
minimize (over a) \sum_i e_i^2
The least-squares solution is a simple linear algebra problem. (See https://stats.stackexchange.com/questions/186196/understanding-linear-algebra-in-ordinary-least-squares-derivation/186289#186289 for a derivation if you really care.) (Further note: e_i is a linear function of a, so I'm not sure why you would need lsqcurvefit, as another answer suggested.)
Matlab Code for OLS (Ordinary Least Squares)
Assuming sig, K, and T are n-by-1 vectors:
y = sig;
X = [ones(length(sig),1), K, K.^2, T, T.^2, K.*T];
a = X \ y; % solves the least-squares problem; like inv(X'*X)*(X'*y) but numerically better
This is an ordinary least squares regression of y on X.
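For readers without MATLAB, roughly the same computation in plain Python (the data and "true" coefficients below are made up; the normal equations X'X a = X'y are solved by naive Gaussian elimination, which is fine for six parameters):

```python
def solve(A, b):
    """Naive Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Least squares via the normal equations X'X a = X'y."""
    p = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    return solve(XtX, Xty)

# Made-up "true" parameters and an exact (noise-free) grid of (K, T)
# observations, so OLS should recover the parameters almost exactly.
a_true = [0.2, -0.01, 0.001, 0.05, -0.002, 0.0005]
data = [(K, T) for K in (0.8, 0.9, 1.0, 1.1, 1.2) for T in (0.25, 0.5, 1.0)]
rows = [[1.0, K, K**2, T, T**2, K*T] for K, T in data]
y = [sum(c * v for c, v in zip(a_true, r)) for r in rows]
a_hat = ols(rows, y)
```

(With noisy data the recovery is of course only approximate; the noise-free case just verifies the machinery.)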
Further Ideas
Depending on the distribution of your error terms, correlated errors, etc., regular OLS may be inefficient or possibly even inappropriate; I'm not familiar enough with the details of this problem to know. You may want to check what people do in practice.
E.g., a technique that's less sensitive to big outliers is to minimize the sum of absolute errors:
minimize (over a) \sum_i |e_i|
If you have a good statistical model of how the data is generated, you could do maximum likelihood estimation. Anyway, this rapidly devolves into a multi-quarter statistics class.

How to fit an equation of the form y = a0 + a1*log(x) + a2*log(2/x)

I have an equation of the form y = a0 + a1*log(x) + a2*log(2/x). Is there a way to fit this kind of equation?
I tried to use polyfit, but finding the coefficients a0, a1 and a2 is difficult for me.
Please help me.
What toolboxes are available to you?
The easiest way would probably be cftool (type it into your command window), if you have the Curve Fitting Toolbox. But polyfit should do as well.
The main problem I see: Your coefficients are not independent of one another. Because log(2/x) is equal to log(2) - log(x) your equation becomes:
y = a0 + a1*log(x) + a2*log(2) - a2*log(x);
which is equivalent to:
y = b0 + b1*log(x);
with b0 = a0 + a2*log(2) and b1 = a1 - a2. Try fitting that one; the original a0, a1 and a2 cannot be identified separately from the data.
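A sketch of that reduced fit (in Python for illustration, pure standard library; the data is synthetic with known b0 and b1, so the fit should recover them). Fitting y = b0 + b1*log(x) is just linear least squares in log(x), solved here with the 2x2 normal equations:

```python
import math

def fit_log(xs, ys):
    """Least-squares fit of y = b0 + b1*log(x) via 2x2 normal equations."""
    n = len(xs)
    L = [math.log(x) for x in xs]
    sL, sLL = sum(L), sum(l * l for l in L)
    sy, sLy = sum(ys), sum(l * y for l, y in zip(L, ys))
    det = n * sLL - sL * sL
    b1 = (n * sLy - sL * sy) / det
    b0 = (sy - b1 * sL) / n
    return b0, b1

# Synthetic data generated from b0 = 2, b1 = 3.
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ys = [2.0 + 3.0 * math.log(x) for x in xs]
b0, b1 = fit_log(xs, ys)
```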

Lagrange interpolation

I checked the answers about Lagrange interpolation, but I couldn't find one that suits my question. I'm trying to use Lagrange interpolation for a surface with MATLAB. Let's say I have x and y vectors and f = f(x,y). I want to interpolate this function f. I think what I did is mathematically correct:
function v = laginterp(x, y, f, xx, yy)
% Tensor-product Lagrange interpolation.
% f is a length(x)-by-length(y) matrix with f(k,l) = f(x(k), y(l));
% xx, yy are equally sized arrays of evaluation points (e.g. from meshgrid).
n = length(x);
m = length(y);
v = zeros(size(xx));
for k = 1:n
    for l = 1:m
        w1 = ones(size(xx));
        w2 = ones(size(yy));
        for j = [1:k-1 k+1:n]   % Lagrange basis polynomial in x
            w1 = w1 .* (xx - x(j)) ./ (x(k) - x(j));
        end
        for j = [1:l-1 l+1:m]   % Lagrange basis polynomial in y
            w2 = w2 .* (yy - y(j)) ./ (y(l) - y(j));
        end
        v = v + w1 .* w2 .* f(k,l);
    end
end
This is my function; I then want to call it for given x, y, f like:
x = 0:4;
y = [-6 -3 -1 6];
f = [2 9 4 25 50];   % this needs to be a 5-by-4 matrix f(k,l), not a vector
[xx, yy] = meshgrid(x, y);   % evaluation grid
v = laginterp(x, y, f, xx, yy);
surf(xx, yy, v)
I'm always grateful for any help!
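As a sanity check on the tensor-product formula itself (a plain-Python sketch with a made-up grid): the interpolant must reproduce f exactly at the nodes, and with 3x3 nodes it is exact for any polynomial of degree at most 2 in each variable.

```python
def lag2(x, y, f, xi, yi):
    """Evaluate the tensor-product Lagrange interpolant of f at (xi, yi)."""
    s = 0.0
    for k in range(len(x)):
        for l in range(len(y)):
            w1 = 1.0
            for j in range(len(x)):      # Lagrange basis polynomial in x
                if j != k:
                    w1 *= (xi - x[j]) / (x[k] - x[j])
            w2 = 1.0
            for j in range(len(y)):      # Lagrange basis polynomial in y
                if j != l:
                    w2 *= (yi - y[j]) / (y[l] - y[j])
            s += w1 * w2 * f[k][l]
    return s

# Made-up 3x3 grid sampling the polynomial x^2 + y (degree 2 in x, 1 in y),
# so the interpolant should reproduce it exactly.
x = [0.0, 1.0, 2.0]
y = [-1.0, 0.0, 2.0]
f = [[xk**2 + yl for yl in y] for xk in x]
val = lag2(x, y, f, 1.5, 1.0)   # exact: 1.5**2 + 1.0 = 3.25
```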
Lagrange interpolation is essentially NEVER a good choice for interpolation. Yes, it is used in the first chapter of many texts that discuss interpolation. Does that make it good? No. That just makes it convenient, a good way to INTRODUCE ideas of interpolation, and sometimes to prove some simple results.
A serious problem is that a user decides to try this miserable excuse for an interpolation method and finds that, lo and behold, it does work for 2 or 3 points. Wow, look at that! So the obvious continuation is to use it on their real data set with 137 points, or 10000 data points or more, some of which are usually replicates. What happened? Why does my code not give good results? Or maybe they will just blindly assume that it did work, and then publish a paper containing meaningless results.
Yes, there is a Lagrange tool on the File Exchange. Yes, it probably even got some good reviews, written by first-year students who had no real idea what they were looking at, and who sadly have no concept of numerical analysis. Don't use it.
If you need an interpolation tool in MATLAB, you could start with griddata or TriScatteredInterp. These will yield quite reasonable results. Other methods are radial basis function interpolations, of which there is also a tool on the FEX, and a wide variety of splines, my personal favorite. Note that ANY interpolation, used blindly without understanding or appreciation of the pitfalls can and will produce meaningless results. But this is true of almost any numerical method.
This doesn't address your question directly, but there's a Lagrange interpolation function on the Matlab File Exchange which seems pretty popular.