Hi everyone, and thank you for taking the time to read this.
I have the following code to train a neural network:
P = [-1 2 0.5 3];
T1 = 1;
T2 = 2;
T3 = 1.5;
net = newff([-1 3;-1 3;-1 3;-1 3],[2 1],{'logsig' 'logsig'},'traingd');
net.trainParam.epochs = 50;
net.trainParam.lr = 0.6;
%now start to train for the first 50 epochs
[net,Y,E,Pf,Af,tr] = train(net,P',T1)
I want to have the error for every epoch. I trained my network for 50 epochs and at the end it gives me the final error, but I want all of the errors!
If you know you are going to train for 50 epochs you can set up an array and record the error for each one.
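A minimal sketch of that idea (reusing the train call signature from your snippet): set epochs to 1 and call train in a loop, so the weights carry over between calls and you can record the error after every pass.
errs = zeros(50,1);                        % one slot per epoch
net.trainParam.epochs = 1;                 % a single epoch per call to train
for k = 1:50
    [net,Y,E,Pf,Af,tr] = train(net,P',T1); % weights carry over between calls
    errs(k) = mean(E(:).^2);               % record the mean squared error for this epoch
end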
I have neural data collected across 16 different channels. This data was recorded over a 30 second period.
Over a 10 s period (from 20 to 30 s), I want to count the number of neural data points that are greater than or equal to a specified threshold, in bins of 0.001 s.
I am using MATLAB R2019b.
My code so far looks like this:
t1 = 20;
t2 = 30;
ind1 = find(tim_trl>=t1, 1);
ind2 = find(tim_trl>=t2, 1);
time1 = tim_trl(ind1:ind2); %10s window
sampRate = 24414; %sampling freq (Hz), samples per sec
muaWindow = 0.001; %1ms window
binWidth = round(muaWindow*sampRate); %samples per 1ms window
threshold = 0.018;
for jj = 1:16 %ch
    data = AbData(ind1:ind2, jj); %10 sec of data
    for kk = 1:10000
        abDataBin = data(1:binWidth,jj); %data in 1 bin
        dataThreshold = find(abDataBin >= threshold); %find data points >= threshold
        mua(kk,jj) = sum(dataThreshold); %number of data pts over threshold per ch
    end
end
So far, I'm just having a bit of trouble at this point:
abDataBin = data(1:binWidth,jj); %data in 1 bin
When I run the loop, the data in bin 1 gets overwritten rather than shifting to bins 2, 3, ..., 10000. I'd appreciate any feedback on fixing this.
Many thanks.
You forgot to use the running variable as an index to access your data. Try:
% create data with 16 channels
AbData = rand(10000,16);
binWidth = 24;
threshold = 0.001;
for channel = 1:16
    data = AbData(2001:3000,channel);
    counter = 1; % needed for mua indexing
    % loop over the bin starting indices
    for window = 1:binWidth:length(data)-binWidth+1
        % access the data in one bin
        bindata = data(window:window+binWidth-1);
        % count the samples at or above threshold in this bin
        mua(counter, channel) = sum(bindata >= threshold);
        counter = counter+1;
    end
end
EDIT:
Your data variable is of dimension nx1, therefore it doesn't need the column indexing with jj.
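One optional tweak: preallocate mua before the loops, e.g. mua = zeros(floor(1000/binWidth), 16); (1000 being the samples per channel in this example), so it does not grow on every iteration.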
My dataset is huge. Let X be the input training data, which is 6x140000, and T be the targets, which are 3x140000.
net = patternnet(10);
% Set divide parameters
net.divideFcn = 'divideind';
net.divideParam.trainInd = loc_Train;
net.divideParam.testInd = loc_Test;
net.divideParam.valInd = loc_Valid;
net.trainFcn = 'trainscg';
% Set training parameters
net.trainParam.epochs = 1000;
net.trainParam.max_fail = 20;
net.trainParam.min_grad = 1e-20;
net.trainParam.goal = 1e-10; % Set a very small value
% Set network performance functions
net.performFcn = 'crossentropy';
net.performParam.regularization = 0.02;
net.performParam.normalization = 'none';
net.trainParam.showWindow = 0;
net.trainParam.showCommandLine = 1;
After I have setup my network, I run the following code to train my network.
[net, tr] = train(net, X, T);
The command line shows:
Calculation mode: MEX

Training Pattern Recognition Neural Network with TRAINSCG.

Epoch 0/1000, Time 0.001, Performance 0.0061672/1e-10, Gradient 0.00065207/1e-20, Validation Checks 0/20
Epoch 20/1000, Time 2.214, Performance 0.0060292/1e-10, Gradient 6.3997e-05/1e-20, Validation Checks 20/20

Training with TRAINSCG completed: Validation stop.
The tr object, which is the training record, should hold information such as the testing indices. However, tr.testInd returns empty.
I have the following simulation running in Matlab. For a period of 25 years, it simulates "Assets", which grow according to geometric brownian motion, and "Liabilities", which grow at a fixed rate of 7% each year. At the end of the simulation, I take the ratio of Assets to Liabilities, and the trial is successful if this is greater than 90%.
All inputs are fixed except for Sigma (the standard deviation). My goal is to find the lowest possible value of sigma that will result in a ratio of assets to liabilities > 0.9 for every year.
Is there anything in Matlab designed to solve this kind of optimization problem?
The code below sets up the simulation for a fixed value of sigma.
%set up inputs
nPeriods = 25;
years = 2016:(2016+nPeriods);
rate = Assumptions.Returns;
sigma = 0.15; %This is the input that I want to optimize
dt = 1;
T = nPeriods*dt;
nTrials = 500;
StartAsset = 81.2419;
%calculate fixed liabilities
StartLiab = 86.9590;
Liabilities = zeros(size(years))'
Liabilities(1) = StartLiab
for idx = 2:length(years)
Liabilities(idx) = Liabilities(idx-1)*(1 + Assumptions.Discount)
end
%run simulation
obj = gbm(rate,sigma,'StartState',StartAsset);
%rng(1,'twister');
[X1,T] = simulate(obj,nPeriods,'DeltaTime',dt, 'nTrials', nTrials);
Ratio = zeros(size(X1))
for i = 1:nTrials
Ratio(:,:,i)= X1(:,:,i)./Liabilities;
end
Unsuccessful = Ratio < 0.9
UnsuccessfulCount = sum(sum(Unsuccessful))
First make your simulation a function that takes sigma as the input:
function f = asset(sigma)
%set up inputs
nPeriods = 25;
years = 2016:(2016+nPeriods);
rate = Assumptions.Returns; % note: Assumptions is not defined inside this function; define it here or pass it in
%sigma = %##.##; %This is the input of the function that I want to optimize
dt = 1;
T = nPeriods*dt;
nTrials = 500;
StartAsset = 81.2419;
%calculate fixed liabilities
StartLiab = 86.9590;
Liabilities = zeros(size(years))'
Liabilities(1) = StartLiab
for idx = 2:length(years)
Liabilities(idx) = Liabilities(idx-1)*(1 + Assumptions.Discount)
end
%run simulation
obj = gbm(rate,sigma,'StartState',StartAsset);
%rng(1,'twister');
[X1,T] = simulate(obj,nPeriods,'DeltaTime',dt, 'nTrials', nTrials);
Ratio = zeros(size(X1))
for i = 1:nTrials
Ratio(:,:,i)= X1(:,:,i)./Liabilities;
end
Unsuccessful = Ratio < 0.9
UnsuccessfulCount = sum(sum(Unsuccessful))
f = sigma + UnsuccessfulCount
end
Then you can use fminbnd (or fminsearch for problems with more than one input) to find the minimizing value of sigma. Since each unsuccessful year/trial adds 1 to f while sigma itself contributes less than 1 on [0.001, 0.999], the minimum favors the smallest sigma that keeps the failure count at zero.
Sigma1 = 0.001;
Sigma2 = 0.999;
optSigma = fminbnd(@asset,Sigma1,Sigma2)
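One caveat: simulate draws random numbers, so each evaluation of asset is noisy, while fminbnd assumes a deterministic objective. A minimal sketch of one way to handle this, re-enabling the rng line that is commented out in the function, plus a hypothetical two-argument variant of asset so the Assumptions struct does not have to live in the function's workspace:
% inside asset, uncomment the seed so every evaluation reuses the same draws:
%   rng(1,'twister');
% then pass Assumptions explicitly through an anonymous function
objective = @(s) asset(s, Assumptions); % hypothetical asset(sigma, Assumptions) variant
optSigma = fminbnd(objective, Sigma1, Sigma2)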
I'm trying to find where I am making mistakes. I'd be very glad if you could help me.
Here is my problem:
In serial, the train function from the Neural Network Toolbox behaves one way, but when I put it inside a parfor loop everything goes crazy.
>> version
ans =
8.3.0.532 (R2014a)
Here is the function:
function per = neuralTr(tSet,Y,CrossVal,Ycv)
hiddenLayerSize = 94;
redeT = patternnet(hiddenLayerSize);
redeT.input.processFcns = {'removeconstantrows','mapminmax'};
redeT.output.processFcns = {'removeconstantrows','mapminmax'};
redeT.divideFcn = 'dividerand'; % Divide data randomly
redeT.divideMode = 'sample'; % Divide up every sample
redeT.divideParam.trainRatio = 80/100;
redeT.divideParam.valRatio = 10/100;
redeT.divideParam.testRatio = 10/100;
redeT.trainFcn = 'trainscg'; % Scaled conjugate gradient
redeT.performFcn = 'crossentropy'; % Cross-entropy
redeT.trainParam.showWindow=0; %default is 1
redeT = train(redeT,tSet,Y);
outputs = sim(redeT,CrossVal);
per = perform(redeT,Ycv,outputs);
end
And here is the code I'm typing:
Data loaded in workspace
whos
Name Size Bytes Class Attributes
CrossVal 282x157 354192 double
Y 2x363 5808 double
Ycv 2x157 2512 double
per 1x1 8 double
tSet 282x363 818928 double
Executing the function in serial:
per = neuralTr(tSet,Y,CrossVal,Ycv)
per =
0.90
Starting parallel
>> parpool local
Starting parallel pool (parpool) using the 'local' profile ... connected to 12 workers.
ans =
Pool with properties:
Connected: true
NumWorkers: 12
Cluster: local
AttachedFiles: {}
IdleTimeout: Inf (no automatic shut down)
SpmdEnabled: true
Initializing and executing the function 12 times in parallel
per = cell(12,1);
parfor ii = 1 : 12
per{ii} = neuralTr(tSet,Y,CrossVal,Ycv);
end
per
per =
[0.96]
[0.83]
[0.92]
[1.08]
[0.85]
[0.89]
[1.06]
[0.83]
[0.90]
[0.93]
[0.95]
[0.81]
Executing again to see if random initialization brings different values
per = cell(12,1);
parfor ii = 1 : 12
per{ii} = neuralTr(tSet,Y,CrossVal,Ycv);
end
per
per =
[0.96]
[0.83]
[0.92]
[1.08]
[0.85]
[0.89]
[1.06]
[0.83]
[0.90]
[0.93]
[0.95]
[0.81]
EDIT 1:
Running the function only with for
per = cell(12,1);
for ii = 1 : 12
per{ii} = neuralTr(tSet,Y,CrossVal,Ycv);
end
per
per =
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
[0.90]
EDIT 2:
I modified my function and now everything works great. Maybe the problem occurs when the data is divided in parallel, so I divided the data before sending it to the workers. Thanks a lot.
function per = neuralTr(tSet,Y,CrossVal,Ycv)
indt = 1:round(size(tSet,2) * 0.8) ;
indv = round(size(tSet,2) * 0.8):round(size(tSet,2) * 0.9);
indte = round(size(tSet,2) * 0.9):size(tSet,2);
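% note: these index vectors are computed but not used below;
% the network is still split with 'dividerand'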
hiddenLayerSize = 94;
redeT = patternnet(hiddenLayerSize);
redeT.input.processFcns = {'removeconstantrows','mapminmax'};
redeT.output.processFcns = {'removeconstantrows','mapminmax'};
redeT.divideFcn = 'dividerand'; % Divide data randomly
redeT.divideMode = 'sample'; % Divide up every sample
redeT.divideParam.trainRatio = 80/100;
redeT.divideParam.valRatio = 10/100;
redeT.divideParam.testRatio = 10/100;
redeT.trainFcn = 'trainscg'; % Scaled conjugate gradient
redeT.performFcn = 'crossentropy'; % Cross-entropy
redeT.trainParam.showWindow=0; %default is 1
redeT = train(redeT,tSet,Y);
outputs = sim(redeT,CrossVal);
per = zeros(12,1);
parfor ii = 1 : 12
redes = train(redeT,tSet,Y);
per(ii) = perform(redes,Ycv,outputs);
end
end
Result:
>> per = neuralTr(tSet,Y,CrossVal,Ycv)
per =
0.90
0.90
0.90
0.90
0.90
0.90
0.90
0.90
0.90
0.90
0.90
0.90
Oh! I think I found it, but I can't test it.
You have this in your code:
redeT.divideFcn = 'dividerand'; % Divide data randomly
If each of the workers divides the data randomly, then it's expected for them to produce different results, isn't it?
Try the following:
per = cell(12,1);
parfor ii = 1 : 12
rng(1); % set the seed for random number generation, so the numbers generated will be the same every time
per{ii} = neuralTr(tSet,Y,CrossVal,Ycv);
end
per
Not sure if neuralTr sets the seed inside, but give it a go.
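If that alone doesn't do it, a variant (untested sketch) is to set the seed inside neuralTr itself, so it runs on each worker before patternnet initializes the weights and 'dividerand' splits the data:
function per = neuralTr(tSet,Y,CrossVal,Ycv)
rng(1); % fix the RNG before weight initialization and data division
hiddenLayerSize = 94;
redeT = patternnet(hiddenLayerSize);
% ... (rest of the function unchanged)
end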
I have one set of original image patches (101x101 matrices) and another corresponding set of image patches (same 101x101 size) in binary, which are the 'answer' for training the neural network. I wanted to train my neural network so that it learns to recognize the shape it was trained on in a given image and produces the segmented image (in the same matrix form, 150x10201 maybe?) at the output (as a result of segmentation).
Original image is on the left and the desired output is on the right.
So, as for pre-processing stage of the data, I reshaped the original image patches into vector matrices of 1x10201 for each image patch. Combining 150 of them i get a 150x10201 matrix as my input, and another 150x10201 matrix from the binary image patches. Then I provide these input data into the deep learning network. I used Deep Belief Network in this case.
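For reference, a minimal sketch of that reshaping step (the names patches and train_x are my own, and rand stands in for the real image data):
patches = rand(101,101,150);          % stand-in for the 150 original 101x101 patches
train_x = reshape(patches, [], 150)'; % 150x10201, one row vector per patch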
My MATLAB code to set up and train the DBN is below:
%train a 4 layers 100 hidden unit DBN and use its weights to initialize a NN
rand('state',0)
%train dbn
dbn.sizes = [100 100 100 100];
opts.numepochs = 5;
opts.batchsize = 10;
opts.momentum = 0;
opts.alpha = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
%unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10201);
nn.activation_function = 'sigm';
%train nn
opts.numepochs = 1;
opts.batchsize = 10;
assert(isfloat(train_x), 'train_x must be a float');
assert(nargin == 4 || nargin == 6,'number of input arguments must be 4 or 6')
loss.train.e = [];
loss.train.e_frac = [];
loss.val.e = [];
loss.val.e_frac = [];
opts.validation = 0;
if nargin == 6
    opts.validation = 1;
end
fhandle = [];
if isfield(opts,'plot') && opts.plot == 1
    fhandle = figure();
end
m = size(train_x, 1);
batchsize = opts.batchsize;
numepochs = opts.numepochs;
numbatches = m / batchsize;
assert(rem(numbatches, 1) == 0, 'numbatches must be an integer');
L = zeros(numepochs*numbatches,1);
n = 1;
for i = 1 : numepochs
    tic;
    kk = randperm(m);
    for l = 1 : numbatches
        batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        %Add noise to input (for use in denoising autoencoder)
        if(nn.inputZeroMaskedFraction ~= 0)
            batch_x = batch_x.*(rand(size(batch_x))>nn.inputZeroMaskedFraction);
        end
        batch_y = train_y(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        nn = nnff(nn, batch_x, batch_y);
        nn = nnbp(nn);
        nn = nnapplygrads(nn);
        L(n) = nn.L;
        n = n + 1;
    end
    t = toc;
    if opts.validation == 1
        loss = nneval(nn, loss, train_x, train_y, val_x, val_y);
        str_perf = sprintf('; Full-batch train mse = %f, val mse = %f', ...
            loss.train.e(end), loss.val.e(end));
    else
        loss = nneval(nn, loss, train_x, train_y);
        str_perf = sprintf('; Full-batch train err = %f', loss.train.e(end));
    end
    if ishandle(fhandle)
        nnupdatefigures(nn, fhandle, loss, opts, i);
    end
    disp(['epoch ' num2str(i) '/' num2str(opts.numepochs) '. Took ' num2str(t) ' seconds' '. Mini-batch mean squared error on training set is ' num2str(mean(L((n-numbatches):(n-1)))) str_perf]);
    nn.learningRate = nn.learningRate * nn.scaling_learningRate;
end
Can anyone let me know whether training the NN like this enables it to do the segmentation work, or how I should modify the code to train the NN so that it can generate the output/result as an image matrix in 150x10201 form?
Thank you so much..
Your inputs are TOO big. You should try to work with smaller patches, from 19x19 up to a maximum of 30x30 (which already represents 900 neurons in the input layer).
Then comes your main problem: you only have 150 images! When you train a NN, you need at least three times more training instances than weights in your NN, so be very careful with the architecture you choose.
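To see the scale of the problem, a rough back-of-the-envelope calculation (assuming, purely for illustration, a single hidden layer of 100 units fed with the full 101x101 patches):
nIn = 101*101;          % 10201 input units
nHidden = 100;          % hypothetical hidden layer size
nWeights = nIn*nHidden; % ~1.02 million weights in the first layer alone
nNeeded = 3*nWeights    % ~3 million training instances by the rule of thumb above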
A CNN may be better suited to your problem.