I'm trying to build an ARMAX model which predicts reservoir water elevation as a function of previous elevations and an upstream inflow. My data is on a timestep of roughly 0.041 days, but it does vary slightly, and I have 3643 time series points. I've tried using the basic armax Matlab command, but am getting this error:
Error using armax (line 90)
Operands to the || and && operators must be convertible to logical scalar values.
The code I'm trying is:
data = iddata(y,x,[],'SamplingInstants',JDAYs)
m1 = armax(data, [30 30 30 1])
where y is a vector of elevations that starts like y = [135.780 135.800 135.810 135.820 135.820 135.830]', x is a vector of flowrates that starts like x = [238.865 238.411 238.033 237.223 237.223 233.828]', and JDAYs is a vector of timestamps that starts like JDAYs = [122.604 122.651 122.688 122.729 122.771 122.813]'.
I'm new to this model type and the system identification toolbox, so I'm having issues figuring out what's causing that error. The Matlab examples aren't very helpful...
I hope this isn't reaching you too late.
Checking your code, I see that you are using a parameter called SamplingInstants. I wasn't sure whether armax works with it, so I tried it several times, and no, it doesn't. It also doesn't seem to be a well-documented option for ARMAX, or for the other estimation methods.
ARX, ARMAX, and related models are linear discrete-time systems in the Z-transform formalism, which means one usually assumes the system has been sampled at a regular rate. That is not a hard law, but it is the standard framework when dealing with linear (and also non-linear) systems, and most industrial control and acquisition systems still work with regular-rate sampling.
Try to work within the standard ARMAX setting, like this:
y=[135.780 135.800 135.810 135.820 135.820 135.830 .....]';
x=[238.865 238.411 238.033 237.223 237.223 233.828 .....]';
%JDAYs=[122.604 122.651 122.688 122.729 122.771 122.813 .....]';
JDAYs=122.601+(0:length(y)-1)*0.0418;   % regular spacing; the mean step in the original JDAYs is about 0.0418 days
data = iddata(y,x,[],'SamplingInstants',JDAYs);
m1 = armax(data, [30 30 30 1])
And this will always work. Just make sure that x and y are long enough to allow proper estimation of all the free coefficients, greater than mean(4*orders) for armax to run (in this case, greater than 121), and ideally greater than 10*mean(4*orders) so the algorithm can solve your problem properly, and that the data is sufficiently time-varying to avoid ending up with ill-conditioned solutions.
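If you would rather keep the actual measurements than fabricate regular timestamps, one option (my own suggestion, not part of the original answer) is to interpolate the irregular series onto a uniform grid first and then build the iddata object with a plain sample time. The variable names, the 'pchip' choice, and the use of interp1 below are all illustrative:

Ts    = mean(diff(JDAYs));                  % average spacing, roughly 0.042 days
tGrid = (JDAYs(1):Ts:JDAYs(end))';          % uniform time grid
yGrid = interp1(JDAYs, y, tGrid, 'pchip');  % elevations interpolated onto the grid
xGrid = interp1(JDAYs, x, tGrid, 'pchip');  % inflows interpolated onto the grid
data  = iddata(yGrid, xGrid, Ts);           % regular-rate data, no SamplingInstants needed
m1    = armax(data, [30 30 30 1])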
Good Luck ;)...
There is a [Q,R] = qr(A,0) function in Matlab, which, according to documentation, returns an "economy" version of the qr-decomposition of A. norm(A-Q*R) returns ~1e-12 for my data set. Also Q'*Q should theoretically return I. In practice there are small nonzero elements above and below the diagonal (of the order of 1e-6 or so), as well as diagonal elements that are slightly greater than 1 (again, by 1e-6 or so). Is anyone aware of a way to control the precision of qr(.,0), or the quality (orthogonality) of the resulting Q, either by specifying an epsilon or via the number of iterations? The size of the data set makes qr(A) run out of memory, so I have to use qr(A,0).
When I try the non-economy setting, I actually get comparable results for A-Q*R, even for a tiny matrix containing small numbers, as shown here:
A = magic(20);
[Q, R] = qr(A); %Result does not change when using qr(A,0)
norm(A-Q*R)
As such I don't believe the 'economy' setting is the problem, as confirmed by @horchler in the comments; rather, you have simply run into the limits of how accurately calculations can be done with data of type double.
Even if you change the accuracy somehow, you will always be dealing with an approximation, so perhaps the first thing to consider here is whether you really need greater accuracy than you already have. If you need more accuracy there may always be a way, but I doubt whether it will be a straightforward one.
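As a rough sanity check (my own sketch, not from the original answer), you can compare the reconstruction and orthogonality errors against the scale that double precision allows, eps times the norm of A; if they are of the same order, there is nothing left to tune:

A = magic(20);                                   % stand-in for your data
[Q, R] = qr(A, 0);
reconstructionErr = norm(A - Q*R);               % ~1e-12 in the question
orthogonalityErr  = norm(Q'*Q - eye(size(Q,2))); % deviation of Q from orthonormal
precisionFloor    = eps * norm(A);               % rough floor set by double precision
fprintf('||A-QR|| = %g, ||Q''Q-I|| = %g, eps*||A|| = %g\n', ...
    reconstructionErr, orthogonalityErr, precisionFloor)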
I have a large vector of recorded data which I need to resample. The problem I encounter is that when using resample, I get the following error:
??? Error using ==> upfirdn at 82
The product of the downsample factor Q and the upsample factor P must be less than 2^31.
Now, I understand why this is happening - my two sampling rates are very close together, so the integer factors need to be quite large (something like 73999/74000). Unfortunately this means that the appropriate filter can't be created by MATLAB. I also tried resampling just up, with the intention of then resampling down, but there is not enough memory to do this to even 1 million samples of data (mine is 93M).
What other methods could I use to properly resample this data?
An interpolated polyphase FIR filter can be used to interpolate just the new set of sample points without using an upsampling+downsampling process.
But if performance is completely unimportant, here's a Quick and Dirty windowed-Sinc interpolator in Basic.
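For reference, here is a minimal MATLAB sketch of the same idea (my own translation, not the linked Basic code): evaluate a windowed-sinc kernel directly at each requested output time, so no upsample/downsample factors are involved. sincResample, fsIn, tNew, and halfWidth are illustrative names, sinc comes from the Signal Processing Toolbox, and the sketch assumes tNew lies within the duration of y:

function yNew = sincResample(y, fsIn, tNew, halfWidth)
% Interpolate the uniformly sampled signal y (rate fsIn) at arbitrary times tNew.
% halfWidth = half-length of the kernel in input samples (quality vs. speed).
if nargin < 4, halfWidth = 32; end
y    = y(:);
yNew = zeros(numel(tNew), 1);
for i = 1:numel(tNew)
    nC   = round(tNew(i)*fsIn) + 1;                          % nearest input index
    nIdx = max(1, nC-halfWidth) : min(numel(y), nC+halfWidth);
    dt   = tNew(i)*fsIn - (nIdx - 1);                        % offsets in samples
    w    = 0.5 + 0.5*cos(pi*dt/(halfWidth + 1));             % Hann window
    h    = sinc(dt).*w;                                      % windowed-sinc kernel
    seg  = y(nIdx);                                          % neighbouring input samples
    yNew(i) = (h(:).' * seg(:)) / sum(h);                    % normalized weighted sum
end
end

For a 93M-sample signal you would call this chunk by chunk, e.g. with tNew = (0:numOut-1)/fsOut for each block, where numOut follows from the ratio of the two rates.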
Here's my code, I hope it helps:
function resig = resamplee(sig,upsample,downsample)
% Recursively split the signal in half until the resampling factors are small
% enough for resample to accept (their product must stay below 2^31).
if upsample*downsample < 2^31
    resig = resample(sig,upsample,downsample);
else
    sig1half = sig(1:floor(length(sig)/2));
    sig2half = sig(floor(length(sig)/2)+1:end);   % +1 avoids duplicating the middle sample
    resig1half = resamplee(sig1half,floor(upsample/2),length(sig1half));
    resig2half = resamplee(sig2half,upsample-floor(upsample/2),length(sig2half));
    resig = [resig1half;resig2half];
end
end
First a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with 2 different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, which should be seen as kind of the 'gold standard'; the program I'm trying to move to is Matlab.
I was trying to be smart and wrote some code (a lot at least for me) and the goal of that code was the following:
1. Take an individual longitudinal datafile
2. Fit this data to a non-parametric model using lsqcurvefit
3. Obtain figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values, whereas Matlab wanders off and sometimes ends up larger by a factor of 1000. The graphs, however, stay more or less the same in both cases, and both fit the data well. Still, it would be lovely to know how to make the Matlab curve fitting more 'conservative', so it stays closer to its original starting values.
validFitPersons = true(nbValidPersons,1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1) >= minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel,initialValues,personalData(:,1),personalData(:,2),[],[],opts);
    catch
        x = 1;   % placeholder so a failed fit does not stop the loop
    end
end
Above is the part of the code I've written to fit the datafiles to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
    % y = a.*(1-1/(1+(b_1(t+e))^c_1+(b_2(t+e))^c_2+(b_3(t+e))^c_3))
    heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).*(ages+params(8))).^params(5) ...
        +(params(3).*(ages+params(8))).^params(6)+(params(4).*(ages+params(8))).^params(7))));
    modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
    % Define initial values
    if strcmpi('male',gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve fitting process in KaleidaGraph as closely as possible. There I found that they use the Levenberg-Marquardt algorithm, which is why I selected it. However, the results still vary and I don't have any more clues about how to change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is: I have 5 models with different parameters and different starting values (the second part of my code), and then I have the general curve fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my starting values can wander off.
Anyone any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
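A minimal sketch of that call using the variables from the question; note that bound constraints are handled by the default 'trust-region-reflective' algorithm rather than 'levenberg-marquardt', and the factor of 2 below is purely illustrative (it assumes the starting values are positive):

lb = initialValues/2;                      % lower bounds near the starting values
ub = initialValues*2;                      % upper bounds near the starting values
opts = optimoptions(@lsqcurvefit, 'Algorithm', 'trust-region-reflective');
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub, opts);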
Cheers
You state:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is I have 5 models with different parameters and different starting values (the second part of my code) and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem really lies with the model you have shown, which results in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
edit
To drive the point home regarding the problem with the choice of model, here are (1) the results of a trial fit using simulated data and (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.
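For completeness, here is a hedged sketch of how such a parameter correlation matrix can be computed from an lsqcurvefit fit; it reuses heightModel, initialValues, and personalData from the question, and uses the usual Jacobian-based covariance approximation rather than whatever was used to produce the figure described above:

[params, resnorm, residual, ~, ~, ~, J] = lsqcurvefit(heightModel, ...
    initialValues, personalData(:,1), personalData(:,2));
J      = full(J);                                      % Jacobian at the solution
sigma2 = resnorm / (numel(residual) - numel(params));  % residual variance estimate
C      = sigma2 * inv(J.'*J);                          % approximate parameter covariance
Rho    = C ./ (sqrt(diag(C)) * sqrt(diag(C)).');       % parameter correlation matrix
imagesc(abs(Rho)), colorbar                            % entries near 1 = strongly correlated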
I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that nevertheless appears to be monotone in the region, a fairly small range of interest, and not a very high requirement on precision, I think all the criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred stepsize, and evaluate your function at
xmin:stepsize:xmax
If required (and monotonicity really applies), take the grid points around the sign change as the new upper and lower bounds and repeat the process for better accuracy, as in the sketch below.
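A minimal sketch of that scan for the question's setup (the range and step are arbitrary choices of mine):

xmin = 165.95; xmax = 200; stepsize = 0.5;        % bracket and resolution
kGrid = xmin:stepsize:xmax;
fGrid = arrayfun(@(k) Lik(k) - L0, kGrid);        % one evaluation per grid point
i = find(diff(sign(fGrid)) ~= 0, 1);              % first sign change
fprintf('root between %g and %g\n', kGrid(i), kGrid(i+1))
% repeat with xmin = kGrid(i), xmax = kGrid(i+1) and a smaller step if needed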
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function within an optimization loop over multiple variables. Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop, with a huge starting upper bound (1e200), and with a check on the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;
    [x,fval,exitflag] = fminbnd(@(x) myFun(x,...),LB,UB,options);  % myFun stands in for your single-variable function
    factor = factor * 0.001;
end
The solver exits the while loop when a good solution is found. You can of course also play with the LB by introducing another factor, and/or adjust the step of the factor.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the middle of a certain interval and then keep the left or right half, so that one bound always gives a negative value and the other a positive value. Since you halve the interval each time, you reach arbitrary precision very quickly. For example:
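A bare-bones sketch for the interval given in the question, assuming Lik(k)-L0 changes sign exactly once on the bracket (lo, hi, and tol are my own names):

lo = 165.95; hi = 200; tol = 1e-4;
flo = Lik(lo) - L0;                      % negative at the lower bound
while hi - lo > tol
    mid  = (lo + hi)/2;
    fmid = Lik(mid) - L0;
    if sign(fmid) == sign(flo)
        lo = mid; flo = fmid;            % root is in the upper half
    else
        hi = mid;                        % root is in the lower half
    end
end
k = (lo + hi)/2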
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
I have 2 sets of data of float numbers, set A and set B. Both of them are matrices of size 40*40. I would like to find out which set is closer to the normal distribution. I know how to use probplot() in Matlab to make a probability plot for one set. However, I do not know how to quantify how good the fit to the distribution is.
In Python, when people use probplot, a parameter, R^2, shows how well the data matches the normal distribution: the closer the R^2 value is to 1, the better the fit. Thus I can simply compare two sets of data by their R^2 values. However, because of some machine problem, I cannot use Python on my current machine. Is there a parameter or function similar to the R^2 value in Matlab?
Thank you very much,
Fitting a curve or surface to data and obtaining the goodness of fit, i.e., sse, rsquare, dfe, adjrsquare, rmse, can be done using the function fit. More info here...
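As a rough sketch of that route (fit needs the Curve Fitting Toolbox, norminv the Statistics Toolbox), you can regress the sorted data against theoretical normal quantiles, which mimics the R^2 that Python's probplot reports; A here is one of your 40*40 matrices, and the plotting-position formula is just one common choice:

A = A(:);                               % reshape the 40*40 matrix to 1600-by-1
n = numel(A);
p = ((1:n)' - 0.5)/n;                   % plotting positions
q = norminv(p);                         % theoretical normal quantiles
[f, gof] = fit(q, sort(A), 'poly1');    % straight-line fit of the Q-Q plot
gof.rsquare                             % closer to 1 means closer to normal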
The approach of @nate (+1) is definitely one possible way of going about this problem. However, the statistician in me is compelled to suggest the following alternative (that does, alas, require the Statistics Toolbox - but you have this if you have the student version):
Given that you are interested in (univariate) Normality rather than Multivariate Normality, consider using the Jarque-Bera test.
Jarque-Bera tests the null hypothesis that a given dataset is generated by a Normal distribution, versus the alternative that it is generated by some other distribution. If the Jarque-Bera test statistic is less than some critical value, then we fail to reject the null hypothesis.
So how does this help with the goodness-of-fit problem? Well, the larger the test statistic, the more "non-Normal" the data is. The smaller the test statistic, the more "Normal" the data is.
So, assuming you have converted your matrices into two vectors, A and B (each should be 1600 by 1 based on the dimensions you provide in the question), you could do the following:
%# Build sample data
A = randn(1600, 1);
B = rand(1600, 1);
%# Perform JB test
[ANormal, ~, AStat] = jbtest(A);
[BNormal, ~, BStat] = jbtest(B);
%# Display result
if AStat < BStat
disp('A is closer to normal');
else
disp('B is closer to normal');
end
As a little bonus of doing things this way, ANormal and BNormal tell you whether you can reject or fail to reject the null hypothesis that the sample in A or B comes from a normal distribution! Note that jbtest returns 0 when it fails to reject the null: if ANormal is 0, the test indicates that A is plausibly drawn from a Normal, while if ANormal is 1, the null is rejected, i.e. the data in A is probably not generated from a Normal distribution.
CAUTION: The approach I've advocated here is only valid if A and B are the same size, but you've indicated in the question that they are :-)