xlswrite overwrite existing file - matlab

I have 53 xls tables (ch_1, ch_2, ...) that I use as input for a neural network. Afterwards I write the NN results to a new xls and csv:
clc
clear
files = dir('*.xls');
for i = 1:numel(files)
    aa = xlsread(files(i).name);
    fprintf('step: %d\n', i);
    datanameXls = ['channel_' num2str(i) '.xls'];
    datanameCsv = ['channel_' num2str(i) '.csv'];
    a17 = aa(:,1);
    b17 = aa(:,4);
    p = size(a17);
    c17 = zeros(144,31);
    % inputs, targets and o are derived from the columns above
    % (that part of the code is not shown in the post)
    % Create a Fitting Network
    hiddenLayerSize = 10;
    net = fitnet(hiddenLayerSize);
    % Setup Division of Data for Training, Validation, Testing
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;
    % Train the Network
    [net,tr] = train(net,inputs,targets);
    % Test the Network
    outputs = net(inputs);
    A = [o, outputs'];   % 4464x2; o is not defined in the posted snippet
    A(A<0) = 0;
    csvwrite(datanameCsv, A);
    fprintf('csv is written\n');
    xlswrite(datanameXls, A);
    fprintf('xls is written\n');
end
The problem is: when I try this program with one, two, up to nine tables, the results I save through xlswrite are correct, but when I run it with all 52 tables I get wrong tables, because for example ch_1 is overwritten with ch_10.
Any idea?

I solved my problem: dir lists ch_10 through ch_19 before ch_1, so the files were processed out of order. I renamed all my files and it now works correctly. I used the following to rename them:
clc
clear
files = dir('*.xls');
for k = 1:numel(files)
    oldFileName = sprintf('ch_%dMonth_1.xls', k);
    newFileName = sprintf('%03d.xls', k);
    movefile(oldFileName, newFileName);
end
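Alternatively, the listing can be sorted numerically without renaming any files. A minimal sketch, assuming the original ch_<n>Month_1.xls naming:
files = dir('*.xls');
% extract the numeric part of each name (e.g. 12 from 'ch_12Month_1.xls')
nums = cellfun(@(s) sscanf(s, 'ch_%d'), {files.name});
[~, order] = sort(nums);
files = files(order);   % now ch_1, ch_2, ..., ch_9, ch_10, ... in numeric order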

Related

Time Series Forecasting Using Deep Learning in MATLAB

I am using the time series forecasting sample from MathWorks at https://uk.mathworks.com/help/nnet/examples/time-series-forecasting-using-deep-learning.html
The example output shown at that address looks correct. I only changed the dataset and ran the algorithm. Surprisingly, the algorithm does not work well with my dataset and generates a flat line as the forecast.
I am really confused and cannot understand the reason behind that. I may need to tune parameters in the algorithm that I am not aware of. The code I am using is:
%% Load Data
%data = chickenpox_dataset;
%data = [data{:}];
data = xlsread('data.xlsx');
data = data';
%% Divide Data: Training and Testing
numTimeStepsTrain = floor(0.7*numel(data));
XTrain = data(1:numTimeStepsTrain);
YTrain = data(2:numTimeStepsTrain+1);
XTest = data(numTimeStepsTrain+1:end-1);
YTest = data(numTimeStepsTrain+2:end);
%% Standardize Data
mu = mean(XTrain);
sig = std(XTrain);
XTrain = (XTrain - mu) / sig;
YTrain = (YTrain - mu) / sig;
XTest = (XTest - mu) / sig;
%% Define LSTM Network
inputSize = 1;
numResponses = 1;
numHiddenUnits = 500;
layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
%% Training Options
opts = trainingOptions('adam', ...
    'MaxEpochs',500, ...
    'GradientThreshold',1, ...
    'InitialLearnRate',0.005, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',125, ...
    'LearnRateDropFactor',0.2, ...
    'Verbose',0, ...
    'Plots','training-progress');
%% Train Network
net = trainNetwork(XTrain,YTrain,layers,opts);
%% Forecast Future Time Steps
% Prime the network state with the training data, then forecast in closed
% loop: each prediction is fed back as the next input.
net = predictAndUpdateState(net,XTrain);
[net,YPred] = predictAndUpdateState(net,YTrain(end));
numTimeStepsTest = numel(XTest);
for i = 2:numTimeStepsTest
    [net,YPred(1,i)] = predictAndUpdateState(net,YPred(i-1));
end
%% Unstandardize the predictions using mu and sig calculated earlier.
YPred = sig*YPred + mu;
%% RMSE and MAE Calculation
rmse = sqrt(mean((YPred-YTest).^2))
MAE = mae(YPred-YTest)
%% Plot results
figure
plot(data(1:numTimeStepsTrain))
hold on
idx = numTimeStepsTrain:(numTimeStepsTrain+numTimeStepsTest);
plot(idx,[data(numTimeStepsTrain) YPred],'.-')
hold off
xlabel("Month")
ylabel("Cases")
title("Forecast")
legend(["Observed" "Forecast"])
%% Compare the forecasted values with the test data
figure
subplot(2,1,1)
plot(YTest)
hold on
plot(YPred,'.-')
hold off
legend(["Observed" "Forecast"])
ylabel("Cases")
title("Forecast")
subplot(2,1,2)
stem(YPred - YTest)
xlabel("Month")
ylabel("Error")
title("RMSE = " + rmse)
And the data.xlsx is in: https://www.dropbox.com/s/vv1apug7iqlocu1/data.xlsx?dl=1
You want to find temporal patterns in the data. MATLAB's data looks like a sine wave with noise, a very clear pattern. Your data is far from showing a clear pattern, so it needs preprocessing. I would start by removing the slow drifts; a high-pass or band-pass filter of some sort makes sense. Here is a simple line just for a quick view of your data without the slow frequencies:
T = readtable('data.xlsx', 'ReadVariableNames', false);
figure; plot(T.Var1 - smoothdata(T.Var1, 'movmean', 200))
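For actual filtering rather than a quick look, something like the following could be a starting point. This is only a sketch: highpass requires the Signal Processing Toolbox (R2018a or newer), and the 0.01 cycles/sample cutoff is an arbitrary placeholder you would tune to your data:
x = T.Var1;
fs = 1;                        % one sample per time step (assumed)
xhp = highpass(x, 0.01, fs);   % suppress drifts slower than ~100 samples
figure; plot(xhp); title('high-pass filtered')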

Speedup processing of larger binary files

I have to process thousands of binary files (each 16 MB) by reading pairs of them and creating a bit-level data structure (usually a 1x134217728 array) in order to process them at the bit level.
Currently I am doing this the following way:
conv = @(c) uint8(bitget(c, 1:32));
measurement = NaN(1, sizeOfMeasurements*8);   % (1, 134217728)
fid = fopen(fileName, 'rb');
byteContent = fread(fid, 'uint32');
fclose(fid);
bitRepresentation1 = arrayfun(conv, byteContent, 'UniformOutput', false);
measurement = [bitRepresentation1{:}];
To speed up the reading, I replaced the fopen/fread part with memmapfile as below:
m = memmapfile(fileName, 'Format', {'uint32', [4194304 1], 'byteContent'});
byteContent = m.Data.byteContent;
byteContent = double(byteContent);
I printed timing information (using tic/toc) for the individual instructions, and it turns out that the bottleneck is:
bitRepresentation1 = arrayfun(conv, byteContent, 'UniformOutput', false); % see the first line of code above for conv
Are there more efficient ways of transforming byteContent into an array that stores one bit per index (i.e., a bit representation of byteContent)?
Let bitget handle the looping over all numbers, and loop over the bits instead:
fid = fopen(fileName, 'rb');
bitContent = fread(fid, '*ubit64');
fclose(fid);
conv = @(ii) uint8(bitget(bitContent, ii));
bitRepresentation = arrayfun(conv, 1:64, 'UniformOutput', false);
measurement = [bitRepresentation{:}]';
measurement = measurement(:).';
EDIT: you can also try a direct loop:
fid = fopen(fileName, 'rb');
bitContent = fread(fid, '*ubit64');
fclose(fid);
sz = 64 * size(bitContent, 1);
measurement3 = zeros(1, sz, 'uint8');
weave = 1:64:sz;
for ii = 1:64
    measurement3(weave + ii - 1) = uint8(bitget(bitContent, ii));
end
but on my system that is (surprisingly) slower than arrayfun. My MATLAB version is from the stone age, though, so your mileage may vary. Give it a try.
Several things seem to provide further improvement on Rody's suggestion:
(minor) Using a local function instead of an anonymous function for conv.
(major) Converting the result of conv to logical using ~~ instead of uint8.
(major) Using cell2mat instead of [bitRepresentation{:}]'.
The result:
function q40863898(filename)
fid = fopen(filename, 'rb');
bitContent = fread(fid, '*ubit64');
fclose(fid);
bitRepresentation = arrayfun(@convert, 1:64, 'UniformOutput', false);
measurement = reshape(cell2mat(bitRepresentation).', [], 1).';
    function out = convert(ii)
        out = ~~(bitget(bitContent, ii, 'uint64'));
    end
end
Benchmark results (MATLAB R2016b, Win10 x64, 14 MB file; timeit times in seconds):
Rody's vectorized method: 0.87783
Rody's loop method: 2.37
Dev-iL's method: 0.68387
Benchmark code:
function q40863898(filename)
%% Common code:
fid = fopen(filename, 'rb');
bitContent = fread(fid, '*ubit64');
fclose(fid);
%% Verification:
ref = Rody1();
res = {Rody2(), uint8(Devil1())};
assert(isequal(ref, res{1}));
assert(isequal(ref, res{2}));
%% Benchmark:
disp(['Rody''s vectorized method: ' num2str(timeit(@Rody1))]);
disp(['Rody''s loop method: ' num2str(timeit(@Rody2))]);
disp(['Dev-iL''s method: ' num2str(timeit(@Devil1))]);
%% Functions:
    function measurement = Rody1()
        conv = @(ii) uint8(bitget(bitContent, ii));
        bitRepresentation = arrayfun(conv, 1:64, 'UniformOutput', false);
        measurement = [bitRepresentation{:}]';
        measurement = measurement(:).';
    end
    function measurement = Rody2()
        sz = 64 * size(bitContent, 1);
        measurement = zeros(1, sz, 'uint8');
        weave = 1:64:sz;
        for ii = 1:64
            measurement(weave + ii - 1) = uint8(bitget(bitContent, ii));
        end
    end
    function measurement = Devil1()
        bitRepresentation = arrayfun(@convert, 1:64, 'UniformOutput', false);
        measurement = reshape(cell2mat(bitRepresentation).', [], 1).';
        function out = convert(ii)
            out = ~~(bitget(bitContent, ii, 'uint64'));
        end
    end
end
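For reference, the benchmark is invoked with the path of one of the binary files; the file name below is just a placeholder:
q40863898('measurement_pair_01.bin');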

MATLAB nntraintool not working

I want to train my recognition system with the feature dataset that I have created.
There are 9 inputs (features) and 40 targets (classes).
Inputs and targets are in the same matrix: the first 9 columns are the features and the remaining 40 columns are the outputs.
I have written the code below:
load dataSet.mat;
inputs = featureSet(:,1:9)';
targets = featureSet(:,10:49)';
% Create a Pattern Recognition Network
% (ns and the trRa/vaRa/teRa ratios are defined elsewhere, not shown)
hiddenLayerSize = ns;
net = patternnet(hiddenLayerSize);
net.divideParam.trainRatio = trRa/100;
net.divideParam.valRatio = vaRa/100;
net.divideParam.testRatio = teRa/100;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs);
valPerformance = perform(net,valTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
I copied this code from my previous, working character recognition application. There are only two differences from the former application: the input count and the target count.
After training starts, the nntraintool GUI window opens but no action is observed. Additionally, I get a message at the bottom that says "Performance goal met."
What could be the reason?

MATLAB: One Step Ahead Neural Network Timeseries Forecast

Intro: I'm using MATLAB's Neural Network Toolbox in an attempt to forecast time series one step into the future. Currently I'm just trying to forecast a simple sinusoidal function, but hopefully I will be able to move on to something a bit more complex after I obtain satisfactory results.
Problem: Everything seems to work fine, however the predicted forecast tends to be lagged by one period. Neural network forecasting isn't much use if it just outputs the series delayed by one unit of time, right?
Code:
t = -50:0.2:100;
noise = rand(1,length(t));        % note: generated but never added to y below
y = sin(t) + 1/2*sin(t + pi/3);
split = floor(0.9*length(t));
forperiod = length(t) - split;
numinputs = 5;
forecasted = [];
msg = '';
for j = 1:forperiod
    fprintf(repmat('\b',1,numel(msg)));
    msg = sprintf('forecasting iteration %g/%g...\n', j, forperiod);
    fprintf('%s', msg);
    estdata = y(1:split+j-1);
    estdatalen = size(estdata,2);
    signal = estdata;
    last = signal(end);
    [signal,low,high] = preprocess(signal');   % pre-process (see EDIT)
    signal = signal';
    % rowshiftmat (helper, not shown) presumably builds the lagged-input
    % index matrix; these two lines are overwritten by preparets below
    inputs = signal(rowshiftmat(length(signal),numinputs));
    targets = signal(numinputs+1:end);
    %% NARNET METHOD
    feedbackDelays = 1:4;
    hiddenLayerSize = 10;
    net = narnet(feedbackDelays,[hiddenLayerSize hiddenLayerSize]);
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    signalcells = mat2cell(signal,1,ones(1,length(signal)));
    [inputs,inputStates,layerStates,targets] = preparets(net,{},{},signalcells);
    net.trainParam.showWindow = false;
    net.trainParam.showCommandLine = false;
    net.trainFcn = 'trainlm';   % Levenberg-Marquardt
    net.performFcn = 'mse';     % Mean squared error
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);
    next = net(inputs(end),inputStates,layerStates);
    next = postprocess(next{1}, low, high);    % post-process
    next = (next+1)*last;
    forecasted = [forecasted next];
end
figure(1);
plot(1:forperiod, forecasted, 'b', 1:forperiod, y(end-forperiod+1:end), 'r');
grid on;
Note:
The function preprocess simply converts the data into logged % differences, and postprocess converts the logged % differences back for plotting. (See the EDIT for the preprocess and postprocess code.)
Results:
BLUE: Forecasted Values
RED: Actual Values
Can anyone tell me what I'm doing wrong here? Or perhaps recommend another method to achieve the desired results (lagless prediction of a sinusoidal function, and eventually more chaotic time series)? Your help is very much appreciated.
EDIT:
It's been a few days now and I hope everyone has enjoyed their weekend. Since no solutions have emerged, I've decided to post the code for the helper functions postprocess.m, preprocess.m, and their helper function normalize.m. Maybe this will help get the ball rolling.
postprocess.m:
function data = postprocess(x, low, high)
% denormalize
logdata = (x+1)/2*(high-low) + low;
% invert the signed log transform (note: zero entries produce NaN here)
sgn = logdata./abs(logdata);
data = sgn.*(exp(abs(logdata))-1);
end
preprocess.m:
function [y, low, high] = preprocess(x)
% differencing
diffs = diff(x);
% percent changes
chngs = diffs./x(1:end-1,:);
% signed log transform (note: zero changes produce NaN here)
sgn = chngs./abs(chngs);
logdata = sgn.*log(abs(chngs)+1);
% normalize log-returns to [-1, 1]
high = max(max(logdata));
low = min(min(logdata));
y = [];
for i = 1:size(logdata,2)
    y = [y normalize(logdata(:,i), -1, 1)];
end
end
normalize.m:
function Y = normalize(X, low, high)
%NORMALIZE Linear normalization of X between low and high values.
if length(X) <= 1
    error('Length of X input vector must be greater than 1.');
end
mi = min(X);
ma = max(X);
Y = (X - mi)/(ma - mi)*(high - low) + low;
end
I didn't check your code, but I made a similar test predicting sin() with a NN. The result seems reasonable, without a lag. I think your bug is somewhere in the synchronization of the predicted values with the actual values.
Here is the code:
%% init & params
t = (-50 : 0.2 : 100)';
y = sin(t) + 0.5 * sin(t + pi / 3);
sigma = 0.2;
n_lags = 12;
hidden_layer_size = 15;
%% create net
net = fitnet(hidden_layer_size);
%% train
noise = sigma * randn(size(t));
y_train = y + noise;
out = circshift(y_train, -1);
out(end) = nan;
in = lagged_input(y_train, n_lags);
net = train(net, in', out');
%% test
noise = sigma * randn(size(t)); % new noise
y_test = y + noise;
in_test = lagged_input(y_test, n_lags);
out_test = net(in_test')';
y_test_predicted = circshift(out_test, 1); % sync with actual value
y_test_predicted(1) = nan;
%% plot
figure,
plot(t, [y, y_test, y_test_predicted], 'linewidth', 2);
grid minor; legend('orig', 'noised', 'predicted')
and the lagged_input() function:
function in = lagged_input(in, n_lags)
for k = 2:n_lags
    in = cat(2, in, circshift(in(:, end), 1));
    in(1, k) = nan;
end
end
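For intuition, here is what lagged_input produces on a toy column vector; column k holds the series delayed by k-1 steps, padded with NaN at the top:
lagged_input((1:5)', 3)
% ans =
%      1   NaN   NaN
%      2     1   NaN
%      3     2     1
%      4     3     2
%      5     4     3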

Equation that computes a Neural Network in Matlab

I created a neural network in MATLAB. This is the script:
load dati.mat;
inputs=dati(:,1:8)';
targets=dati(:,9)';
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
net = struct(net);
net.inputs{1}.processParams{2}.ymin = 0;
net.inputs{1}.processParams{4}.maxfrac = 0.02;
net.outputs{2}.processParams{4}.maxfrac = 0.02;
net.outputs{2}.processParams{2}.ymin = 0;
net = network(net);
net.divideFcn = 'divideind';
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainInd = 1:428;
net.divideParam.valInd = 429:520;
net.divideParam.testInd = 521:612;
net.trainFcn = 'trainscg'; % Scaled conjugate gradient backpropagation
net.performFcn = 'mse'; % Mean squared error
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', 'plotregression', 'plotconfusion', 'plotroc'};
net=init(net);
net.trainParam.max_fail=20;
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
Now I want to save the weights and biases of the network and write out the equation that it computes.
I saved the weights and biases:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
So I did the data preprocessing and wrote the following equation:
max_range = 0;
[y,ps] = removeconstantrows(input, max_range);
ymin = 0;
ymax = 1;
[y,ps2] = mapminmax(y, ymin, ymax);
ymean = 0;
ystd = 1;
y = mapstd(y, ymean, ystd);
maxfrac = 0.02;
y = processpca(y, maxfrac);
in = y';
uscita = tansig(W2*(tansig(W1*in + b1)) + b2);
But with the same input, input = 1:8, I get different results. Why? What's wrong? Help me please, it's important!
I use MATLAB R2010b.
It looks like you are pre-processing the inputs but not post-processing the outputs. Post-processing uses the "reverse" processing form. (Targets are pre-processed, so outputs must be reverse-processed.)
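A minimal sketch of what that could look like with your processing chain, assuming you keep the settings structures returned when the training data was processed (the ps_* names below are placeholders; removeconstantrows is omitted for brevity, but it has 'apply'/'reverse' modes as well). Note the reverse steps run in the opposite order of the forward steps:
% forward: process a new input with the settings saved at training time
x = mapminmax('apply', input, ps_in_mm);
x = mapstd('apply', x, ps_in_std);
x = processpca('apply', x, ps_in_pca);
% network equation (your two-layer tansig form)
raw = tansig(W2*tansig(W1*x + b1) + b2);
% reverse: undo the target pre-processing, last step first
out = processpca('reverse', raw, ps_out_pca);
out = mapstd('reverse', out, ps_out_std);
out = mapminmax('reverse', out, ps_out_mm);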
This equation:
uscita = tansig(W2*(tansig(W1*in + b1)) + b2);
is wrong. Why do you write two tansig? You have 10 neurons; you should write it 10 times or use for i = 1:10;