ARIMA antipersistence - persistence

I’m running RStudio Version 1.1.419 with R-3.4.3 on Windows 10. I am trying to fit an (f)ARIMA model while constraining the fractional differencing parameter d to lie in (-0.5, 0.5) during optimization, i.e. allowing for antipersistence (d < 0), short memory (d = 0) and long memory (d > 0). I have tried multiple functions to accomplish that. I am aware that the default drange in fracdiff is (0, 0.5). Therefore this ...
> result <- fracdiff(MeanPrice, nar = 2, nma = 1, drange = c(-0.5,0.5))
sadly returns this:
Warning: C fracdf() optimization failure
Warning message: unable to compute correlation matrix; maybe change 'h'
Is there a way to fit fracdiff or other models (maybe arfima::arfima()?) with that drange? Your help is very much appreciated.

If you look at the package documentation, it states that the h argument for fracdiff "is used to compute a finite difference approximation to the Hessian, and hence only influences the cov, cor, and std.error computations." However, as they are referring to the Hessian, I would assume that this affects the results of the MLE. There are other functions in that package that may be helpful: fdGPH for estimating the order of fractional differencing based on the Geweke and Porter-Hudak method, and similarly fdSperio.
Take a look at the forecast package. If you estimate the order of fractional differencing using the above-mentioned functions, you might be able to use the same method described in the details of the arfima function.

Related

Matrix Multiplication Simplification not giving same results in code

I want to evaluate the expression x^T A x - 2 y^T A x + y^T A y in Matlab.
If my calculations are correct, this should be equivalent to evaluating (x - y)^T A (x - y)
My main problem right now is that I am not getting the same results from the above two expressions. I'm wondering if it's a precision error or a conceptual error.
I tried a very simple example with some random values to check my math was right. The mini example I used is as follows
A = [1,2,0;2,1,2;0,2,1];
x = [1;2;3];
y = [4;4;4];
(transpose(x)*A*x)-(2*transpose(y)*A*x)+(transpose(y)*A*(y))
transpose(x-y)*A*(x-y)
Both give 46.
I then tried a more realistic example of values for what I'm doing and it failed. For example
A = [2.66666666666667,-0.333333333333333,0,-0.333333333333333,-0.333333333333333,0,0,0,0;
-0.333333333333333,2.66666666666667,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,0,0,0;
0,-0.333333333333333,2.66666666666667,0,-0.333333333333333,-0.333333333333333,0,0,0;
-0.333333333333333,-0.333333333333333,0,2.66666666666667,-0.333333333333333,0,-0.333333333333333,-0.333333333333333,0;
-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,2.66666666666667,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333;
0,-0.333333333333333,-0.333333333333333,0,-0.333333333333333,2.66666666666667,0,-0.333333333333333,-0.333333333333333;
0,0,0,-0.333333333333333,-0.333333333333333,0,2.66666666666667,-0.333333333333333,0;
0,0,0,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,2.66666666666667,-0.333333333333333;
0,0,0,0,-0.333333333333333,-0.333333333333333,0,-0.333333333333333,2.66666666666667];
x =[1.21585420370805;
1.00388159497757e-16;
-0.405284734569351;
1.36809776609658e-16;
-1.04796659533634e-17;
-7.52459042423650e-17;
-0.607927101854027;
-8.49163704356314e-17;
0.303963550927013];
v =[0.0319067068305797,0.00786616506360615,0.0709811622828447,0.0719615328671117;
1.26150800194897e-17,5.77584497721720e-18,7.89740111567879e-18,7.14396333930938e-18;
-0.158358815125228,-0.876275686098803,0.0539216716399594,0.0450616819309899;
7.90937837037453e-18,3.24196519177793e-18,3.99402664932776e-18,4.17486202509670e-18;
5.35533279761622e-18,-8.91723780863019e-19,-1.56128480212603e-18,1.84423677629470e-19;
-2.18478057810330e-17,-6.63779320738873e-18,-3.21099714760257e-18,-3.93612240449303e-18;
-0.0213963226092681,-0.0168256143048771,-0.0175695110350900,-0.0128155908603601;
-4.06029341772399e-18,-5.65705978843172e-18,-1.80182480882056e-18,-1.59281757789645e-18;
0.221482525259710,-0.0576644539359728,0.0163934384910353,0.0197432976432437];
u = [1.37058079022968;
1.79302486419321;
69.4330887239959;
-52.3949662214410];
y = v*u;
Gives 0 for the first expression and 7.1387e-28 for the second. Is this a precision error thing? Which version is better/more accurate to use, and why? Thank you!
I have not checked your simplification, but floating-point math in general is inexact. Even doing the same operations in a different order can give different results. Deviations of the size you are seeing could reasonably come from floating-point round-off, and it is up to you to decide whether they are acceptable for your purpose.
To determine which version is more accurate you would have to compute reference values to compare against, which might prove difficult.
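If it helps to see the cancellation explicitly, here is a minimal sketch (assuming A, x and y are defined as in your second example, and noting that the two forms agree analytically only for symmetric A, which both of your matrices are) that evaluates both expressions and compares their difference with the size of the terms being cancelled:
q1 = (x'*A*x) - (2*y'*A*x) + (y'*A*y);              % expanded form
q2 = (x-y)'*A*(x-y);                                % factored form
scale = abs(x'*A*x) + abs(2*y'*A*x) + abs(y'*A*y);  % magnitude of the cancelled terms
fprintf('q1 = %g, q2 = %g, |q1-q2| = %g, eps*scale = %g\n', q1, q2, abs(q1-q2), eps*scale);
In the second example both results are far below eps*scale, i.e. below what the expanded form can even resolve after the cancellation, which is consistent with the round-off explanation above.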

Dimensionality reduction using PCA - MATLAB

I am trying to reduce dimensionality of a training set using PCA.
I have come across two approaches.
[V,U,eigen]=pca(train_x);
eigen_sum=0;
for lamda = 1:length(eigen)
    eigen_sum = eigen_sum + eigen(lamda,1);
    if (eigen_sum/sum(eigen) >= 0.90)
        break;
    end
end
train_x=train_x*V(:, 1:lamda);
Here, I simply use the eigenvalue matrix to reconstruct the training set with a lower number of features, determined by the principal components that describe 90% of the original variance.
The alternate method that I found is almost exactly the same, save the last line, which changes to:
train_x=U(:,1:lamda);
In other words, we take the training set as the principal component representation of the original training set up to some feature lamda.
Both of these methods seem to yield similar results (out-of-sample test error), but there is a difference, however minuscule it may be.
My question is, which one is the right method?
The answer depends on your data, and what you want to do.
Using your variable names: generally speaking, one would expect the outputs of pca to satisfy
U = train_x * V
But this is only true if your data is normalized, specifically if you already removed the mean from each component. If not, then what one can expect is
U = train_x * V - mean(train_x * V)
And in that regard, whether you want to remove or keep the mean of your data before processing it depends on your application.
It's also worth noting that even if you remove the mean before processing, there might be some small difference, but it will be around floating-point precision error:
((train_x * V) - U) ./ U ~~ 1.0e-15
and this error can be safely ignored.
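As a small illustration of the centering point above (made-up data, Statistics Toolbox pca as in the question), the scores U equal the projection of the mean-centered data, and the uncentered projection differs from U only by its column means:
train_x = rand(50,6);                                   % illustrative data
[V, U, eigen] = pca(train_x);                           % coefficients, scores, eigenvalues
Xc = bsxfun(@minus, train_x, mean(train_x,1));          % remove the column means
disp(max(max(abs(Xc*V - U))))                           % ~ floating-point noise
P  = train_x*V;                                         % projection without centering
disp(max(max(abs(bsxfun(@minus, P, mean(P,1)) - U))))   % also ~ floating-point noise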

How to build an ARMAX model in Matlab

I'm trying to build an ARMAX model which predicts reservoir water elevation as a function of previous elevations and an upstream inflow. My data is on a timestep of roughly 0.041 days, but it does vary slightly, and I have 3643 time series points. I've tried using the basic armax Matlab command, but am getting this error:
Error using armax (line 90)
Operands to the || and && operators must be convertible to
logical scalar values.
The code I'm trying is:
data = iddata(y,x,[],'SamplingInstants',JDAYs)
m1 = armax(data, [30 30 30 1])
where y is a vector of elevations that starts like y = [135.780; 135.800; 135.810; 135.820; 135.820; 135.830]', x is a vector of flowrates that starts like x = [238.865; 238.411; 238.033; 237.223; 237.223; 233.828]', and JDAYs is a vector of timestamps that starts like JDAYs = [122.604; 122.651; 122.688; 122.729; 122.771; 122.813]'.
I'm new to this model type and the system identification toolbox, so I'm having issues figuring out what's causing that error. The Matlab examples aren't very helpful...
I hope this is not getting to you too late.
Checking your code, I see that you are using a parameter called SamplingInstants. I'm not sure the armax function works with it. Actually, I am sure: I have tried several times, and no, it doesn't. It also doesn't seem to be a well-documented option for armax, or for other methods either.
The ARX, ARMAX, and other such models are based on linear discrete-time systems in the Z-transform formalism, that is, one usually assumes that the system has been sampled at a regular rate. This is not a law, of course, but it is the standard framework when dealing with linear (and also non-linear) systems, and most industrial control & acquisition systems still work with a regular sampling rate.
Try to get inside the ARMAX standard setting, like this:
y=[135.780 135.800 135.810 135.820 135.820 135.830 .....]';
x=[238.865 238.411 238.033 237.223 237.223 233.828 .....]';
%JDAYs=[122.604 122.651 122.688 122.729 122.771 122.813 .....]';
JDAYs=122.601+[0:length(y)-1]*0.0418;  % regular grid at roughly the question's 0.041-day timestep
data = iddata(y,x,[],'SamplingInstants',JDAYs);
m1 = armax(data, [30 30 30 1])
And this will always work. Please just ensure that x and y are long enough to enable proper estimation of all the free coefficients: greater than mean(4*orders) for ARMAX to work at all (in this case, greater than 121), preferably greater than 10*mean(4*orders) for the ARMAX algorithm to solve your problem properly, and with enough time variation to avoid running into ill-conditioned solutions.
Good Luck ;)...
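A variation on the same idea (a sketch, assuming y, x and the original JDAYs are the full column vectors from your question and that the System Identification Toolbox is available): instead of fabricating regular timestamps, treat the data as regularly sampled at the average timestep and pass a scalar sample time to iddata, which avoids SamplingInstants altogether.
Ts = mean(diff(JDAYs));            % average timestep, roughly 0.041 days
data = iddata(y, x, Ts);           % regular-rate data object, no SamplingInstants
m1 = armax(data, [30 30 30 1]);    % [na nb nc nk], same orders as in the question
compare(data, m1)                  % quick visual check of the fit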

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that nonetheless appears to be monotone in the region, a fairly small range of interest, and no very high precision requirement, I think all the criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred step size, and evaluate your function at
xmin:stepsize:xmax
If required (and if monotonicity really applies), you can obtain a tighter upper and lower bound from this scan and repeat the process for better accuracy.
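A minimal sketch of that scan (assuming Lik and L0 are defined as in the question; the step size is a free choice):
kGrid = 165.95:0.05:200;                          % xmin:stepsize:xmax
vals  = arrayfun(@(k) Lik(k) - L0, kGrid);        % one (expensive) evaluation per grid point
i     = find(vals(1:end-1).*vals(2:end) <= 0, 1); % first sign change
kGrid([i i+1])                                    % bracket; rescan with a finer step inside it if needed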
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a function (single variable) within an optimization loop (multiple variables). Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop with a huge starting upper bound (1e200) and a constraint on the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;
    [x,fval,exitflag] = fminbnd(@(x) myfun(x,...), LB, UB, options);  % myfun is a placeholder for your single-variable objective
    factor = factor * 0.001;
end
The solver exits the while loop when a good solution is found. You can of course also play with the LB by introducing another factor and/or increasing the factor step.
My first language isn't English, so I apologize for any mistakes.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of a certain interval and then reduce it to the left or right half, so that one bound always gives a negative value and the other a positive value. Since the interval is halved at each step, you can reach arbitrary precision very quickly.
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
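For completeness, a minimal bisection sketch along those lines (assuming Lik and L0 are defined as in the question and that Lik(k)-L0 changes sign on the starting bracket); the bracket is halved at every step and the expensive function is evaluated only once per iteration:
f = @(k) Lik(k) - L0;
kLo = 165.95;  kHi = 200;          % bracket from the question
fLo = f(kLo);                      % cached, since Lik is expensive
while (kHi - kLo) > 1e-4           % stop once the bracket is narrow enough
    kMid = 0.5*(kLo + kHi);
    fMid = f(kMid);
    if fLo*fMid <= 0               % the sign change is in the lower half
        kHi = kMid;
    else                           % the sign change is in the upper half
        kLo = kMid;  fLo = fMid;
    end
end
kRoot = 0.5*(kLo + kHi)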

Replacement for repmat in MATLAB

I have a function which does the following loop many, many times:
for cluster = 1:max(bins)   % bins is a list in the same format as kmeans() IDX output
    select = bins==cluster; % find group of values
    means(select,:) = repmat_fast_spec(meanOneIn(x(select,:)),sum(select),1);
    % (*, above) for each point, write the mean of all points in x that
    % share its label in bins to the equivalent row of means
    delta_x(select,:) = x(select,:) - means(select,:);
    % subtract out the mean from each point
end
Noting that repmat_fast_spec and meanOneIn are stripped-down versions of repmat() and mean(), respectively, I'm wondering if there's a way to do the assignment in the line labeled (*) that avoids repmat entirely.
Any other thoughts on how to squeeze performance out of this thing would also be welcome.
Here is a possible improvement to avoid REPMAT:
x = rand(20,4);
bins = randi(3,[20 1]);
d = zeros(size(x));
for i = 1:max(bins)
    idx = (bins==i);
    d(idx,:) = bsxfun(@minus, x(idx,:), mean(x(idx,:)));
end
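On newer MATLAB releases (R2016b and later) implicit expansion broadcasts the row-vector mean across the rows automatically, so the same loop works without bsxfun:
x = rand(20,4);
bins = randi(3,[20 1]);
d = zeros(size(x));
for i = 1:max(bins)
    idx = (bins==i);
    d(idx,:) = x(idx,:) - mean(x(idx,:));   % implicit expansion instead of bsxfun
end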
Another possibility:
x = rand(20,4);
bins = randi(3,[20 1]);
m = zeros(max(bins),size(x,2));
for i = 1:max(bins)
    m(i,:) = mean( x(bins==i,:) );
end
dd = x - m(bins,:);
One obvious way to speed up calculation in MATLAB is to make a MEX file. You can compile C code and perform any operations you want. If you're searching for the fastest-possible performance, turning the operation into a custom MEX file would likely be the way to go.
You may be able to get some improvement by using ACCUMARRAY.
%# gather array sizes
[nPts,nDims] = size(x);
nBins = max(bins);
%# calculate means. Not sure whether it might be faster to loop over nDims
meansCell = accumarray(bins,1:nPts,[nBins,1],@(idx){mean(x(idx,:),1)},{NaN(1,nDims)});
means = cell2mat(meansCell);
%# subtract cluster means from x - this is how you can avoid repmat in your code, btw.
%# all you need is the array with cluster means.
delta_x = x - means(bins,:);
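As a quick sanity check on made-up data, the accumarray route reproduces the original repmat-based loop to within floating-point noise (plain repmat and mean stand in for repmat_fast_spec and meanOneIn here):
x = rand(20,4);  bins = randi(3,[20 1]);
[nPts,nDims] = size(x);  nBins = max(bins);
meansCell = accumarray(bins, (1:nPts)', [nBins,1], @(idx){mean(x(idx,:),1)}, {NaN(1,nDims)});
means = cell2mat(meansCell);
delta_fast = x - means(bins,:);
delta_loop = zeros(size(x));
for c = 1:nBins
    sel = (bins==c);
    delta_loop(sel,:) = x(sel,:) - repmat(mean(x(sel,:),1), sum(sel), 1);
end
max(abs(delta_fast(:) - delta_loop(:)))    % expect 0 (or at most a few eps)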
First of all: format your code properly and surround every operator and assignment with whitespace. I find your code very hard to comprehend as it looks like one big blob of characters.
Next, you could follow the other responses and convert the code to C (MEX) or Java, automatically or manually, but in my humble opinion this is a last resort. You should only do such things when your performance falls short by only a small margin. On the other hand, your algorithm doesn't show obvious flaws.
But the first thing you should do when trying to improve performance is profile. Use the MATLAB profiler to determine which part of your code is causing the problems. How much would you need to improve to meet your expectations? If you don't know, determine this bound first; otherwise you will be looking for a needle in a haystack which might not even be there in the first place. MATLAB will never be the fastest kid on the block with respect to runtime, but it might be the fastest with respect to development time for certain kinds of operations. In that respect, it might prove useful to sacrifice the clarity of MATLAB for the execution speed of other languages (C or even Java). But by the same token, you might as well code everything in assembler to squeeze all of the performance out of the code.
Another obvious way to speed up calculation in MATLAB is to make a Java library (similar to @aardvarkk's answer) since MATLAB is built on Java and has very good integration with user Java libraries.
Java's easier to interface and compile than C. It might be slower than C in some cases, but the just-in-time (JIT) compiler in the Java virtual machine generally speeds things up very well.