How to forecast electricity consumption using MATLAB's command "forecast"? - matlab

I have the data of electricity consumption of a region during 2017. So I have two row vectors, one with the months and the other with the consumption values. I want to use the command forecast to forecast the consumption of the first month of 2018, but I don't know how to do this even after reading the examples on MATLAB's help page.
Example:
data = {1166974.25000000, 1132479.36000000, 1137173.86000000, 1145853.58000000, 1118875.72000000, 1071456.85000000 ,1047171.87000000, 1071179.65000000 ,1077986.32000000 ,1112111.10000000, 1149668.47000000 ,1161649.19000000, 1175576.25000000 ,1126753.31000000 ,1204843.11000000 ,1183946.03000000, 1153080.36000000, 1120182.07000000, 1104726.03000000 ,1108110.02000000 ,1137729.28000000 ,1189699.45000000, 1252975.55000000, 1218118.20000000 ,1259580 ,1208193 ,1194430, 1244458, 1218867, 1205705 ,1177362, 1185584, 1164758, 1226991 ,1286044 ,1305312, 1360681.70000000 ,1332020 ,1306497.90000000 ,1299819.10000000 ,1316167.70000000 ,1246959.40000000 ,1256700.20000000 ,1266490.60000000, 1275642.90000000, 1358839.80000000, 1361440.10000000, 1398059.40000000};
data = [data{:}];
sys = ar(data,4)
K = 49;
p = forecast(sys,data,K);
plot(data,'b',p,'r'), legend('measured','forecasted')
Why does this not work?

I hope you have found a solution to your problem. If you have not, maybe I can be of assistance.
MathWorks' documentation of the function notes that the "PastData" argument (labeled data in your code) can be either an iddata object or an N-by-Ny matrix of doubles. Your implementation uses a plain numeric matrix, so I decided to try out the code with an iddata object instead.
rawdat = [1166974.25000000, 1132479.36000000, 1137173.86000000, 1145853.58000000, 1118875.72000000, 1071456.85000000 ,1047171.87000000, 1071179.65000000 ,1077986.32000000 ,1112111.10000000, 1149668.47000000 ,1161649.19000000, 1175576.25000000 ,1126753.31000000 ,1204843.11000000 ,1183946.03000000, 1153080.36000000, 1120182.07000000, 1104726.03000000 ,1108110.02000000 ,1137729.28000000 ,1189699.45000000, 1252975.55000000, 1218118.20000000 ,1259580 ,1208193 ,1194430, 1244458, 1218867, 1205705 ,1177362, 1185584, 1164758, 1226991 ,1286044 ,1305312, 1360681.70000000 ,1332020 ,1306497.90000000 ,1299819.10000000 ,1316167.70000000 ,1246959.40000000 ,1256700.20000000 ,1266490.60000000, 1275642.90000000, 1358839.80000000, 1361440.10000000, 1398059.40000000];
data = iddata(rawdat',[]);
sys = ar(data,4);
K = 49;
p = forecast(sys,data,K);
plot(data,'b',p,'r'), legend('measured','forecasted')
Notice that I also changed the initial data's variable name and type: rawdat is a plain numeric row vector, which is transposed into a column when building the iddata object.
The above code produces a figure comparing the measured and forecasted series.
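Since the original goal was only the first month of 2018, a one-step-ahead forecast may be all you need. Here is a minimal sketch, assuming the same sys and data as above; the OutputData property is how the numeric values are pulled out of the iddata object that forecast returns.
K = 1;                        % forecast a single month (January 2018)
p = forecast(sys, data, K);
jan2018 = p.OutputData;       % numeric one-step-ahead forecast
With a larger K, p.OutputData simply contains one row per forecasted month.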
Please update us. Thanks.

Related

AR terms in SUR models - Matlab

I am trying to estimate a SUR model of the form
y_{1,t} = \alpha_1 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \beta_3 y_{1,t-1} + \epsilon_{1,t}
y_{2,t} = \alpha_2 + \beta_4 x_{1,t} + \beta_5 x_{2,t} + \beta_6 y_{2,t-1} + \epsilon_{2,t}
Define mY = [y_1 y_2] and mX = [x_1 x_2].
For this purpose I am doing
iT = size(mY,1); iN = size(mY,2);
mXsur = kron(mX, eye(iN));
mXsurCell = mat2cell(mXsur, iN*ones(iT,1));
iR = size(mXsur,2);
Mdl = vgxset('n', iN, 'nAR',1, 'nX',iR,'Constant',true);
[SurOutput, SurSDerror, ~,SURcov] = vgxvarx(Mdl, mY, mXsurCell);
The issue is that 'nAR',1 seems to add one lag of both y variables to each equation, and I only wish to add one lag per equation (each equation's own). Is there a quick way to do this?
(Of course I can include the lagged terms manually in the mX matrix, but my question is whether we can do this via vgxset in a quicker way. I think not, based on my reading of the help file, but I still want to double-check.) Thanks
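For what it's worth, here is a rough sketch of the manual workaround mentioned in the last paragraph: build each equation's own lagged y into the block-diagonal regressor matrix so that only one lag per equation enters, and (I believe) set 'nAR' to 0 so no common AR term is added on top. Variable names follow the question; the column ordering groups regressors by equation rather than by regressor, which is only a cosmetic difference.
% Align current and lagged observations (drop the first period).
mYlag = mY(1:end-1,:);                  % y_{i,t-1}
mYcur = mY(2:end,:);                    % y_{i,t}; pass this to vgxvarx in place of mY
mXcur = mX(2:end,:);                    % x_t aligned with mYcur
iT = size(mYcur,1); iN = size(mYcur,2); iK = size(mXcur,2) + 1;
% Equation i's regressor block is [x_{1,t} x_{2,t} y_{i,t-1}].
mXsur = zeros(iN*iT, iN*iK);
for t = 1:iT
    for i = 1:iN
        mXsur((t-1)*iN + i, (i-1)*iK + (1:iK)) = [mXcur(t,:), mYlag(t,i)];
    end
end
mXsurCell = mat2cell(mXsur, iN*ones(iT,1));
Mdl = vgxset('n', iN, 'nAR', 0, 'nX', iN*iK, 'Constant', true);
The rest would proceed as in the question, calling vgxvarx with mYcur and mXsurCell.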

Efficiently calculate mean value from files

From a Monte Carlo simulation I have a set of files, say file_1.mat, file_2.mat, ..., file_n.mat, where n is large. Each file contains one or several (at most 3, if it matters) large 1D time arrays of interest, say var1, var2, var3.
I am interested in finding the mean value of these variables. My question is: how do I do this most efficiently? The keyword here is efficiency. Below you will find the MWE done the standard way, but this is quite time-consuming because the files are large and there are many of them.
I am programming in MATLAB, but ideas presented in pseudocode are also very welcome.
MWE:(The standard way)
meanVar1 = zeros(1,1e6); %I do not remember the exact size, just use 1e6
meanVar2 = zeros(1,1e6);
meanVar3 = zeros(1,1e6);
for i=1:n
load(strcat('file_',int2str(i)),'var1','var2','var3')
meanVar1 = meanVar1 + var1;
meanVar2 = meanVar2 + var2;
meanVar3 = meanVar3 + var3;
end
meanVar1 = meanVar1/n;
meanVar2 = meanVar2/n;
meanVar3 = meanVar3/n;
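If the Parallel Computing Toolbox is available, the summation parallelises naturally, because the accumulators act as reduction variables inside a parfor loop. A sketch under the same assumptions as the MWE (files named file_1.mat ... file_n.mat, each holding three 1e6-element row vectors):
n = 100;                                  % number of files (placeholder)
sumVar1 = zeros(1,1e6);
sumVar2 = zeros(1,1e6);
sumVar3 = zeros(1,1e6);
parfor i = 1:n
    % Load into a struct; this also keeps the loop body transparent for parfor.
    S = load(sprintf('file_%d.mat', i), 'var1', 'var2', 'var3');
    sumVar1 = sumVar1 + S.var1;           % reduction variables
    sumVar2 = sumVar2 + S.var2;
    sumVar3 = sumVar3 + S.var3;
end
meanVar1 = sumVar1/n;
meanVar2 = sumVar2/n;
meanVar3 = sumVar3/n;
Note that with files this large the run time is often dominated by disk I/O, so the benefit of parallel loading depends on the storage; loading only the variables you need, as the MWE already does, remains the main single-threaded saving.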

Trying to balance my dataset through sample_weight in scikit-learn

I'm using RandomForest for classification, and I have an unbalanced dataset: 5830 no, 1006 yes. I try to balance my dataset with class_weight and sample_weight, but I can't.
My code is:
X_train,X_test,y_train,y_test = train_test_split(arrX,y,test_size=0.25)
cw='auto'
clf=RandomForestClassifier(class_weight=cw)
param_grid = { 'n_estimators': [10,50,100,200,300],'max_features': ['auto', 'sqrt', 'log2']}
sw = np.array([1 if i == 0 else 8 for i in y_train])
CV_clf = GridSearchCV(estimator=clf, param_grid=param_grid, cv= 10,fit_params={'sample_weight': sw})
But I don't get any improvement in my TPR, FPR, or ROC metrics when using class_weight and sample_weight.
Why? Am I doing anything wrong?
Nevertheless, if I use the function called balanced_subsample, my metrics improve considerably:
def balanced_subsample(x, y, subsample_size):
    class_xs = []
    min_elems = None
    for yi in np.unique(y):
        elems = x[(y == yi)]
        class_xs.append((yi, elems))
        if min_elems is None or elems.shape[0] < min_elems:
            min_elems = elems.shape[0]
    use_elems = min_elems
    if subsample_size < 1:
        use_elems = int(min_elems * subsample_size)
    xs = []
    ys = []
    for ci, this_xs in class_xs:
        if len(this_xs) > use_elems:
            np.random.shuffle(this_xs)
        x_ = this_xs[:use_elems]
        y_ = np.empty(use_elems)
        y_.fill(ci)
        xs.append(x_)
        ys.append(y_)
    xs = np.concatenate(xs)
    ys = np.concatenate(ys)
    return xs, ys
My new code is:
X_train_subsampled,y_train_subsampled=balanced_subsample(arrX,y,0.5)
X_train,X_test,y_train,y_test = train_test_split(X_train_subsampled,y_train_subsampled,test_size=0.25)
cw='auto'
clf=RandomForestClassifier(class_weight=cw)
param_grid = { 'n_estimators': [10,50,100,200,300],'max_features': ['auto', 'sqrt', 'log2']}
sw = np.array([1 if i == 0 else 8 for i in y_train])
CV_clf = GridSearchCV(estimator=clf, param_grid=param_grid, cv= 10,fit_params={'sample_weight': sw})
This is not a full answer yet, but hopefully it'll help get there.
First some general remarks:
To debug this kind of issue it is often useful to have deterministic behavior. You can pass the random_state parameter to RandomForestClassifier and to the various scikit-learn objects that have inherent randomness to get the same result on every run. You'll also need a fixed seed:
import numpy as np
np.random.seed(0)
import random
random.seed(0)
for your balanced_subsample function to behave the same way on every run.
Don't grid search on n_estimators: more trees are always better in a random forest.
Note that sample_weight and class_weight have a similar objective: the effective sample weights are sample_weight multiplied by the weights inferred from class_weight.
Could you try:
Using subsample_size=1 in your balanced_subsample function. Unless there's a particular reason not to, we're better off comparing results on similar numbers of samples.
Using your subsampling strategy with class_weight and sample_weight both set to None.
EDIT: Reading your comment again I realize your results are not so surprising!
You get a better (higher) TPR but a worse (higher) FPR.
It just means your classifier tries hard to get the samples from class 1 right, and thus makes more false positives (while also getting more of those right of course!).
You will see this trend continue if you keep increasing the class/sample weights in the same direction.
There is an imbalanced-learn API that helps with oversampling/undersampling data and might be useful in this situation. You can pass your training set into one of its methods and it will output the oversampled data for you. See the simple example below (note that newer imbalanced-learn releases rename fit_sample to fit_resample).
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
x_oversampled, y_oversampled = ros.fit_sample(orig_x_data, orig_y_data)
Here is the link to the API: http://contrib.scikit-learn.org/imbalanced-learn/api.html
Hope this helps!

loop through structures and finding the correlation

The following example resembles the problem I'm dealing with; the code below is merely an example, but it is structured in the same format as my actual data set.
clear all
England = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Wales = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Ireland = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Scotland = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Location = struct('England',England,'Wales', Wales, 'Ireland',Ireland,'Scotland',Scotland);
FieldName={'England','Wales','Scotland','Ireland'};
Data = {England.AirT,Wales.AirT,Scotland.AirT,Ireland.AirT};
Data = [FieldName;Data];
Data = struct(Data{:});
Data = cell2mat(struct2cell(Data)');
[R,P] = corrcoef(Data,'rows','pairwise');
R_Value= [FieldName(nchoosek(1:size(R,1),2)) num2cell(nonzeros(tril(R,-1)))];
So, this script shows the correlation between pairs of air-temperature series for the 4 locations. I'm looking for a way of also getting the correlation between the locations for 'SolRad' and 'Rain' (the same process as for AirT), or for any other variable stored in the structure. I could do this by replacing the inputs to Data, but that seems rather long-winded, especially when many different variables are involved. Any ideas on how to do this? I've tried using a loop, but it is harder than I thought to get the data into the same format as the example.
Let's see if this helps, or is what you are thinking:
clear all
England = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Wales = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Ireland = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Scotland = struct('AirT',rand(320,1),'SolRad',rand(320,1),'Rain',rand(320,1));
Location = struct('England',England,'Wales', Wales, 'Ireland',Ireland,'Scotland',Scotland);
% get all the location fields
FieldName = transpose(fieldnames(Location));
% get the variables recorded at the first location
CorrData = fieldnames(Location.(FieldName{1}));
% get variables which were stored at all locations (just to be safe,
% we know that they are all the same)
for ii = 2:length(FieldName)
    CorrData = intersect(CorrData, fieldnames(Location.(FieldName{ii})));
end
% process each variable that was recorded
for ii = 1:length(CorrData)
    Data = cell(1, length(FieldName));
    % get the variable data from each location and store in Data
    for jj = 1:length(FieldName)
        Data{jj} = Location.(FieldName{jj}).(CorrData{ii});
    end
    % process the data
    Data = [FieldName; Data];
    Data = struct(Data{:});
    Data = cell2mat(struct2cell(Data)');
    [R,P] = corrcoef(Data,'rows','pairwise');
    R_Value = [FieldName(nchoosek(1:size(R,1),2)) num2cell(nonzeros(tril(R,-1)))];
    % display the data, sounds good right?
    fprintf(1,'Correlation for %s\n',CorrData{ii});
    for jj = 1:size(R_Value,1)
        fprintf(1,'%s\t%s\t%f\n',R_Value{jj,1},R_Value{jj,2},R_Value{jj,3});
    end
end
Let me know if I misunderstood, or if this is more involved than what you were thinking. Thanks!
fieldnames(s) and dynamic field references are your friend.
What I would suggest is to make one structure in which 'name' is a field, and the other fields are whatever you'd like. Regardless of how you set up your structure s, you can use fn = fieldnames(s); to return a cell array of the field names. You can then access the contents of your structure using these names via dynamic field references, i.e. parentheses around the variable containing the name, as in s.(fn{i}).
fn = fieldnames(s);
for i = 1:length(fn)
    disp([fn{i} ': ' num2str(s.(fn{i}))])
end
Whatever you do with the values is up to you, of course!
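For instance, applied to the Location structure from the question, the same idea collects each variable from every location into a matrix that corrcoef can use directly. This is only a compact sketch of the looping approach above, using nothing beyond fieldnames and dynamic field references:
locs = fieldnames(Location);              % {'England';'Wales';'Ireland';'Scotland'}
vars = fieldnames(Location.(locs{1}));    % {'AirT';'SolRad';'Rain'}
for v = 1:numel(vars)
    % gather this variable from every location, one column per location
    Data = zeros(numel(Location.(locs{1}).(vars{v})), numel(locs));
    for l = 1:numel(locs)
        Data(:,l) = Location.(locs{l}).(vars{v});
    end
    R = corrcoef(Data,'rows','pairwise'); % columns ordered as in locs
    fprintf('Correlation matrix for %s:\n', vars{v});
    disp(R);
end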

IDL and MatLab getting strange values from NetCDF file

I have a NetCDF file which contains data representing total precipitation across the globe over several months (so it's stored in a three-dimensional array). I first checked, both in XConv and with ncdump, that the data was sensible and sensibly formed. All looks fine: values vary from very small (~10^-10, which makes sense, as this is model data and effectively represents zero) to about 5x10^-3.
The problems start when I try to handle this data in IDL or MATLAB. The arrays generated in these programs are full of huge negative numbers such as -4x10^4, with occasional huge positive numbers such as 5000. Strangely, when plotting the data in MATLAB against latitude and longitude (at a specific time), the pattern of rainfall looks sensible, but the values are just completely wrong.
In IDL, I'm reading the file in to write it to a text file so it can be handled by some software that takes very basic text files. Here's the code I'm using:
PRO nao_heaps
address = '/Users/levyadmin/Downloads/'
file_base = 'output'
ncid = ncdf_open(address + file_base + '.nc')
MONTHS=['january','february','march','april','may','june','july','august','september','october','november','december']
varid_field = ncdf_varid(ncid, "tp")
varid_lon = ncdf_varid(ncid, "longitude")
varid_lat = ncdf_varid(ncid, "latitude")
varid_time = ncdf_varid(ncid, "time")
ncdf_varget,ncid, varid_field, total_precip
ncdf_varget,ncid, varid_lat, lats
ncdf_varget,ncid, varid_lon, lons
ncdf_varget,ncid, varid_time, time
ncdf_close,ncid
lats = reform(lats)
lons = reform(lons)
time = reform(time)
total_precip = reform(total_precip)
total_precip = total_precip*1000. ;put in mm
noLats=(size(lats))(1)
noLons=(size(lons))(1)
noMonths=(size(time))(1)
; the data may not be an integer number of years (otherwise we could make this next loop cleaner)
av_precip=fltarr(noLons,noLats,12)
for month=0, 11 do begin
    year = 0
    while ( (year*12) + month lt noMonths ) do begin
        av_precip(*,*,month) = av_precip(*,*,month) + total_precip(*,*, (year*12)+month )
        year++
    endwhile
    av_precip(*,*,month) = av_precip(*,*,month)/year
endfor
fname = address + file_base + '.dat'
OPENW,1,fname
PRINTF,1,'longitude'
PRINTF,1,lons
PRINTF,1,'latitude'
PRINTF,1,lats
for month=0,11 do begin
    PRINTF,1,MONTHS(month)
    PRINTF,1,av_precip(*,*,month)
endfor
CLOSE,1
END
Anyone have any ideas why I'm getting such strange values in MatLab and IDL?!
AH! Found the answer. NetCDF files use an offset and a scale factor (the add_offset and scale_factor attributes) to keep the size of the file to a minimum. To get the correct values, I simply need to:
total_precip = offset + (scale_factor * total_precip) ;put into correct range
At present I'm getting the scale factor and offset from ncdump and hard-coding them into my IDL program, but does anyone know how I can get them dynamically in my IDL code?
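I can only speak with confidence for the MATLAB side: the packing attributes can be read straight from the file with ncreadatt, so nothing needs to be hard-coded (the variable name tp and file path are taken from the IDL code above). Note that MATLAB's high-level ncread applies scale_factor and add_offset automatically, which is worth checking before unpacking by hand; in IDL, NCDF_ATTGET plays the corresponding role.
fname = '/Users/levyadmin/Downloads/output.nc';
scale_factor = ncreadatt(fname, 'tp', 'scale_factor');   % packing attributes
add_offset   = ncreadatt(fname, 'tp', 'add_offset');
tp = ncread(fname, 'tp');   % already unpacked: ncread applies the attributes itself
% If reading raw packed values (e.g. via the low-level netcdf.getVar interface):
% tp = add_offset + scale_factor * double(raw);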