Calculate convolutional layer in CNN implementation - MATLAB

I am trying to train a convolutional neural network, using sparse autoencoders to compute the filters for the convolution layer. I am using the UFLDL code to construct patches and to train the network. My code is the following:
===========================================================================
imageDim = 30; % image dimension
imageChannels = 3; % number of channels (rgb, so 3)
patchDim = 10; % patch dimension
numPatches = 100000; % number of patches
visibleSize = patchDim * patchDim * imageChannels; % number of input units
outputSize = visibleSize; % number of output units
hiddenSize = 400; % number of hidden units
epsilon = 0.1; % epsilon for ZCA whitening
poolDim = 10; % dimension of pooling region
optTheta = zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite = zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);
load patches_16_1
===========================================================================
% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
displayColorNetwork( (W*ZCAWhite));
stepSize = 100;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');
load train.mat % loads numTrainImages, trainImages, trainLabels
load train.mat % reused for the test set; testImages is set from trainImages below
% size 30x30x3x8862
numTestImages = 8862;
numTrainImages = 8862;
pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, floor((imageDim - patchDim + 1) / poolDim), floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
floor((imageDim - patchDim + 1) / poolDim), ...
floor((imageDim - patchDim + 1) / poolDim) );
tic();
testImages = trainImages;
for convPart = 1:(hiddenSize / stepSize)
featureStart = (convPart - 1) * stepSize + 1;
featureEnd = convPart * stepSize;
fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
Wt = W(featureStart:featureEnd, :);
bt = b(featureStart:featureEnd);
fprintf('Convolving and pooling train images\n');
convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
trainImages, Wt, bt, ZCAWhite, meanPatch);
pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
toc();
clear convolvedFeaturesThis pooledFeaturesThis;
fprintf('Convolving and pooling test images\n');
convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
testImages, Wt, bt, ZCAWhite, meanPatch);
pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
toc();
clear convolvedFeaturesThis pooledFeaturesThis;
end
I have problems calculating the convolution and pooling layers. I am getting pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis; subscripted assignment dimension mismatch. The patches have been computed normally and they look like this:
I am trying to understand what exactly the convPart variable is doing, and what pooledFeaturesThis is. Secondly, I notice that my problem is a mismatch in this line:
pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
where I get the message that the dimensions mismatch. The size of pooledFeaturesThis is 100x3x2x2, whereas the size of pooledFeaturesTrain is 400x8862x2x2. What exactly does pooledFeaturesTrain represent? Is the 2x2 result for every filter? cnnConvolve can be found here:
EDIT: I have changed my code a little bit and it works now. However, I am still a little concerned about my comprehension of the code.

OK, so in this line you are setting the pooling region:
poolDim = 10; % dimension of pooling region
This means that for each kernel in each layer you take the image and pool an area of 10x10 pixels. From your code it looks like you are applying a mean function, i.e. it takes a patch, computes the mean, and outputs this to the next layer... in other words, it takes the image from, say, 100x100 down to 10x10. In your network you are repeating convolution+pooling until you get down to a 2x2 image, based on this output (by the way, this is not generally good practice in my experience):
400x8862x2x2
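For intuition, mean pooling over non-overlapping poolDim x poolDim regions can be sketched in a few lines of MATLAB. This is only a sketch of what a UFLDL-style cnnPool typically does for a single feature map (convolvedFeatureMap is a hypothetical name), not necessarily your exact implementation:
% Mean-pool one convolved feature map over non-overlapping regions.
convolvedDim = size(convolvedFeatureMap, 1);
numRegions = floor(convolvedDim / poolDim);
pooledFeatureMap = zeros(numRegions, numRegions);
for r = 1:numRegions
    for c = 1:numRegions
        rows = (r-1)*poolDim+1 : r*poolDim;
        cols = (c-1)*poolDim+1 : c*poolDim;
        region = convolvedFeatureMap(rows, cols);
        pooledFeatureMap(r, c) = mean(region(:)); % mean pooling
    end
end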
Anyway, back to your code. Notice that at the beginning of your training you do the following initialization:
pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, floor((imageDim - patchDim + 1) / poolDim), floor((imageDim - patchDim + 1) / poolDim) );
So your error is quite simple and correct: the size of the matrix which holds the output of the convolution+pooling is not the size of the matrix you initialized (see the sanity check below).
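With the parameters from the question, the expected sizes are easy to verify (a quick sanity check using the values defined at the top of the script):
convolvedDim = imageDim - patchDim + 1;    % 30 - 10 + 1 = 21
pooledDim = floor(convolvedDim / poolDim); % floor(21 / 10) = 2
% Each loop iteration should therefore produce a stepSize x numImages x 2 x 2
% block, i.e. 100 x 8862 x 2 x 2 here. A 100x3x2x2 result means cnnConvolve
% saw only 3 images, which suggests the layout of the image array does not
% match what cnnConvolve expects.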
The question is now how to fix it. I suppose a lazy man's way is to take out the initialization; however, that will drastically slow down your code, and it is not guaranteed to work if you have more than 1 layer.
I suggest you instead make pooledFeaturesTrain a cell array of 3-dimensional arrays. So instead of this
pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
you'd do something more along the lines of this:
pooledFeaturesTrain{n}(:, :, :) = pooledFeaturesThis;
where n is the current layer.
CNN nets aren't as easy as they're cracked up to be, and even when they don't crash, getting them to train well is a feat. I highly suggest reading up on the theory of CNNs; it will make coding and debugging much easier.
Good luck with it ! :)

Related

Good example for comparison between Guided and Gaussian filters

I am looking for a good example that shows the difference between guided and Gaussian filters. The example needs to show the benefit of the guided filter (for example, edge preservation). Could you give me an example for that task? Thanks in advance.
I tried the following example, but it did not show the benefit of the guided filter compared with the Gaussian:
% example: edge-preserving smoothing
% figure 1 in our paper
close all;
I = double(imread('.\img_smoothing\cat.bmp')) / 255;
I = imnoise(I,'gaussian',0.1,0);
p = I;
r = 4; % try r=2, 4, or 8
eps = 0.2^2; % try eps=0.1^2, 0.2^2, 0.4^2
q = guidedfilter(I, p, r, eps);
std_Gb=1;
beta=0.1;
%% Initialization
Ng=ceil(3*std_Gb)+1; Gaussian = fspecial('gaussian',[Ng Ng],std_Gb);
imsm = conv2(I,Gaussian,'same');
[Gx,Gy] = gradient(q );
NormGrad = sqrt(Gx.^2 + Gy.^2);
Gb1 = 1./ (1 + 1* NormGrad.^2);
[Gx,Gy] = gradient(imsm);
NormGrad = sqrt(Gx.^2 + Gy.^2);
Gb2= 1./ (1 + 1* NormGrad.^2);
figure();
subplot(2,1,1);imshow([I, q,imsm],[]);
subplot(2,1,2);imshow([Gb1,Gb2],[]);
If I read your example correctly, I think you're already showing what you need to: the Gaussian-blurred image (center) has far less edge detail than the guided-filter version (right), which also shows some smoothing.
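If you want a more clear-cut demonstration, recent releases of the Image Processing Toolbox have both filters built in, which makes the comparison more direct. A minimal sketch, assuming a grayscale test image (imguidedfilter requires R2014a+, imgaussfilt requires R2015a+):
I = im2double(imread('cameraman.tif'));  % grayscale test image shipped with MATLAB
I = imnoise(I, 'gaussian', 0, 0.01);     % add zero-mean Gaussian noise
q_guided = imguidedfilter(I, 'DegreeOfSmoothing', 0.2^2); % self-guided filtering
q_gauss = imgaussfilt(I, 2);             % Gaussian smoothing with sigma = 2
figure;
imshow([I, q_guided, q_gauss], []); % noisy | guided (edges kept) | Gaussian (edges blurred)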

How to display the similarity between classes?

I have handwritten samples from two writers. I am using a feature extractor to extract features from both.
I want to display the similarity between the classes. As to show how identical both are and how difficult it can be for a classifier to classify them correctly.
I have read papers that use PCA to demonstrate this. I tried PCA, but I don't think I'm implementing it correctly. I'm using this to display the similarity:
[COEFF,SCORE] = princomp(features_extracted);
plot(COEFF,'.')
But for every class and every sample I get exactly the same plot. They should be similar, not exactly the same. What am I doing wrong?
You will struggle to show anything significant with only 10 samples per class, and over 4000 features.
Nevertheless, the following code will calculate PCA and show the relationship between the first two principal components (the components that contain 'most' variance).
% Truly indistinguishable data
dummy_data = randn(20, 4000);
% Uncomment this to make the data distinguishable
%dummy_data(1:10, :) = dummy_data(1:10, :) - 0.5;
% Normalise the data - this isn't technically required for the dummy data
% above, but is included for completeness.
dummy_data_normalised = dummy_data;
for f = 1:size(dummy_data_normalised, 2)
dummy_data_normalised(:, f) = dummy_data_normalised(:, f) - nanmean(dummy_data_normalised(:, f));
dummy_data_normalised(:, f) = dummy_data_normalised(:, f) / nanstd(dummy_data_normalised(:, f));
end
% Generate vector of 10 0's and 10 1's
class_labels = reshape(repmat([0 1], 10, 1), 20, 1);
% Perform PCA
pca_coeffs = pca(dummy_data_normalised);
% Calculate transformed data
dummy_data_pca = dummy_data_normalised * pca_coeffs;
figure;
hold on;
for class = unique(class_labels)'
% Plot first two principal components for this class
scatter(dummy_data_pca(class_labels == class, 1), dummy_data_pca(class_labels == class, 2), 'filled')
end
legend(strcat({'Class '},int2str(unique(class_labels)))')
For indistinguishable data, this will show a scatter plot similar to the following:
Clearly it is not possible to draw a separation boundary between the two classes.
If you uncomment the marked line above to make the data distinguishable, then the plot will instead come out as follows:
However, to repeat what I wrote in my comment, PCA does not necessarily find the components that give the best separation. It is an unsupervised method and only finds the components with the largest variance. In some applications, these are also the components that give good separation. With only 10 samples per class, you will not be able to demonstrate anything statistically significant. Also have a look at this question for more details on PCA and the number of samples per class.
EDIT: This also extends naturally to having more classes:
numer_of_classes = 10;
samples_per_class = 20;
% Truly indistinguishable data
dummy_data = randn(numer_of_classes * samples_per_class, 4000);
% Make the data distinguishable
for i = 1:numer_of_classes
ixd = (((i - 1) * samples_per_class) + 1):(i * samples_per_class);
dummy_data(ixd, :) = dummy_data(ixd, :) - (0.5 * (i - 1));
end
% Normalise the data
dummy_data_normalised = dummy_data;
for f = 1:size(dummy_data_normalised, 2)
dummy_data_normalised(:, f) = dummy_data_normalised(:, f) - nanmean(dummy_data_normalised(:, f));
dummy_data_normalised(:, f) = dummy_data_normalised(:, f) / nanstd(dummy_data_normalised(:, f));
end
% Generate vector of classes (1 to numer_of_classes)
class_labels = reshape(repmat(1:numer_of_classes, samples_per_class, 1), numer_of_classes * samples_per_class, 1);
% Perform PCA
pca_coeffs = pca(dummy_data_normalised);
% Calculate transformed data
dummy_data_pca = dummy_data_normalised * pca_coeffs;
figure;
hold on;
for class = unique(class_labels)'
% Plot first two principal components for this class
scatter(dummy_data_pca(class_labels == class, 1), dummy_data_pca(class_labels == class, 2), 'filled')
end
legend(strcat({'Class '},int2str(unique(class_labels)))')

Back-propagation neural network: hidden layer output is always 1

Everyone, I have created a neural network with 1600 inputs, one hidden layer with a varying number of neurons, and 24 output neurons.
My code shows that I can decrease the error each epoch, but the output of the hidden layer is always 1. For this reason, the adjusted weights always produce the same result for my testing data.
I have tried different numbers of neurons and learning rates in the ANN, and also randomly initialized the initial weights. I use the sigmoid function as my activation function, since each of my outputs is either 1 or 0.
May I know the main reason that causes the output of the hidden layer to always be 1, and how I should solve it?
My purpose for this neural network is to recognize 24 hand shapes for the alphabet; I am trying intensity data in the first phase of my project.
I have tried 30 hidden neurons, 100 neurons, and even 1000 neurons, but the output of the hidden layer is still 1. For this reason, all of the outcomes on the testing data are always similar.
I have added the code for my network below.
Thanks
g = inline('logsig(x)');
[row, col] = size(input);
numofInputNeurons = col;
weight_input_hidden = rand(numofInputNeurons, numofFirstHiddenNeurons);
weight_hidden_output = rand(numofFirstHiddenNeurons, numofOutputNeurons);
epochs = 0;
errorMatrix = [];
while(true)
if(totalEpochs > 0 && epochs >= totalEpochs)
break;
end
totalError = 0;
epochs = epochs + 1;
for i = 1:row
targetRow = zeros(1, numofOutputNeurons);
targetRow(1, target(i)) = 1;
hidden_output = g(input(1, 1:end)*weight_input_hidden);
final_output = g(hidden_output*weight_hidden_output);
error = abs(targetRow - final_output);
error = sum(error);
totalError = totalError + error;
if(error ~= 0)
delta_final_output = learningRate * (targetRow - final_output) .* final_output .* (1 - final_output);
delta_hidden_output = learningRate * (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
for m = 1:numofFirstHiddenNeurons
for n = 1:numofOutputNeurons
current_changes = delta_final_output(1, n) * hidden_output(1, m);
weight_hidden_output(m, n) = weight_hidden_output(m, n) + current_changes;
end
end
for m = 1:numofInputNeurons
for n = 1:numofFirstHiddenNeurons
current_changes = delta_hidden_output(1, n) * input(1, m);
weight_input_hidden(m, n) = weight_input_hidden(m, n) + current_changes;
end
end
end
end
totalError = totalError / (row);
errorMatrix(end + 1) = totalError;
if(errorThreshold > 0 && totalEpochs == 0 && totalError < errorThreshold)
break;
end
end
I see a few obvious errors that need fixing in your code:
1) You have no negative weights when initialising. This is likely to get the network stuck. The weight initialisation should be something like:
weight_input_hidden = 0.2 * rand(numofInputNeurons, numofFirstHiddenNeurons) - 0.1;
2) You have not implemented bias. That will severely limit the ability of the network to learn. You should go back to your notes and figure it out. Bias is usually implemented as an extra column of 1's inserted into the input and activation vectors/matrices before determining the activations of each layer, with a matching additional row of weights in each weight matrix (given the orientation used in your code); see the sketch below.
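A rough sketch of the bias idea (hedged; variable names follow the code in the question):
% The extra '+1' row of the weight matrix holds the bias weights.
weight_input_hidden = 0.2 * rand(numofInputNeurons + 1, numofFirstHiddenNeurons) - 0.1;
input_with_bias = [input(i, :), 1];   % append the constant bias unit
hidden_output = g(input_with_bias * weight_input_hidden);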
3) Your delta for output layer is wrong. This line
delta_final_output = learningRate * (targetRow - final_output) .* final_output .* (1 - final_output);
. . . is not the delta for the output layer activations. It has some extra unwanted factors.
The correct delta for logloss objective function and sigmoid activation in output layer would be:
delta_final_output = (final_output - targetRow);
There are other possibilities, depending on your objective function, which is not shown. Your original code is close to correct for mean squared error, and would probably still work if you changed the sign and removed the factor of learningRate.
4) Your delta for hidden layer is wrong. This line:
delta_hidden_output = learningRate * (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
. . . is not the delta for the hidden layer activations. You have multiplied by the learningRate for some reason (combined with the other delta that means you have a factor of learningRate squared).
The correct delta would be:
delta_hidden_output = (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
5) Your weight update step needs adjusting to match fixes to (3) and (4). These lines:
current_changes = delta_final_output(1, n) * hidden_output(1, m);
would need to be adjusted to get the correct sign and learning rate multiplier:
current_changes = -learningRate * delta_final_output(1, n) * hidden_output(1, m);
That's 5 bugs from looking through the code; I may have missed some. But I think that's more than enough for now. A consolidated sketch of one corrected training step follows.
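Putting fixes (1) through (5) together, one training step could look roughly like this. This is a hedged, vectorized sketch assuming a logloss objective with sigmoid activations; variable names follow the original code, with the bias handled as an extra row of weights per layer:
% Initialisation with negative weights and a bias row per layer (fixes 1 and 2)
weight_input_hidden = 0.2 * rand(numofInputNeurons + 1, numofFirstHiddenNeurons) - 0.1;
weight_hidden_output = 0.2 * rand(numofFirstHiddenNeurons + 1, numofOutputNeurons) - 0.1;
% Forward pass for one example (row i), appending the constant bias unit
x_b = [input(i, :), 1];
hidden_output = g(x_b * weight_input_hidden);
h_b = [hidden_output, 1];
final_output = g(h_b * weight_hidden_output);
% Deltas without learning rate factors (fixes 3 and 4); the bias row of
% weight_hidden_output is excluded when back-propagating to the hidden layer
delta_final_output = final_output - targetRow;
delta_hidden_output = hidden_output .* (1 - hidden_output) ...
    .* (delta_final_output * weight_hidden_output(1:end-1, :)');
% Weight updates with the sign and learning rate applied once (fix 5)
weight_hidden_output = weight_hidden_output - learningRate * (h_b' * delta_final_output);
weight_input_hidden = weight_input_hidden - learningRate * (x_b' * delta_hidden_output);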

Export a neural network trained with MATLAB in other programming languages

I trained a neural network using the MATLAB Neural Network Toolbox, in particular using the command nprtool, which provides a simple GUI for the toolbox features and exports a net object containing the information about the generated NN.
In this way, I created a working neural network that I can use as a classifier; a diagram representing it is the following:
There are 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a two-dimensional output.
What I want to do is use the network in some other programming language (C#, Java, ...).
In order to solve this problem, I tried to use the following code in MATLAB:
y1 = tansig(net.IW{1} * input + net.b{1});
Results = tansig(net.LW{2} * y1 + net.b{2});
Assuming that input is a one-dimensional array of 200 elements, the previous code would work if net.IW{1} were a 20x200 matrix (20 neurons, 200 weights).
The problem is that size(net.IW{1}) returns unexpected values:
>> size(net.IW{1})
ans =
20 199
I got the same problem with a network with 10000 inputs. In this case, the result wasn't 20x10000, but something like 20x9384 (I don't remember the exact value).
So the question is: how can I obtain the weights of each neuron? And after that, can someone explain how I can use them to produce the same output as MATLAB?
I solved the problems described above, and I think it is useful to share what I've learned.
Premises
First of all, we need some definitions. Let's consider the following image, taken from [1]:
In the above figure, IW stands for input weights: these are the weights of the neurons in Layer 1, each of which is connected to every input, as the following image shows [1]:
All the other weights are called layer weights (LW in the first figure); each of them connects to every output of the previous layer. In our case of study, we use a network with only two layers, so we will use only one LW array to solve our problem.
Solution of the problem
After the above introduction, we can proceed by dividing the issue into two steps:
Force the number of input weights to match the input array length
Use the weights to implement and use the neural network just trained in other programming languages
A - Force the number of input weights to match the input array length
Using nprtool we can train our network, and at the end of the process we can also export to the workspace some information about the entire training process. In particular, we need to export:
a MATLAB network object that represents the neural network created
the input array used to train the network
the target array used to train the network
Also, we need to generate an M-file that contains the code MATLAB used to create the neural network, because we need to modify it and change some training options.
The following image shows how to perform these operations:
The generated M-code will be similar to the following:
function net = create_pr_net(inputs,targets)
%CREATE_PR_NET Creates and trains a pattern recognition neural network.
%
% NET = CREATE_PR_NET(INPUTS,TARGETS) takes these arguments:
% INPUTS - RxQ matrix of Q R-element input samples
% TARGETS - SxQ matrix of Q S-element associated target samples, where
% each column contains a single 1, with all other elements set to 0.
% and returns these results:
% NET - The trained neural network
%
% For example, to solve the Iris dataset problem with this function:
%
% load iris_dataset
% net = create_pr_net(irisInputs,irisTargets);
% irisOutputs = sim(net,irisInputs);
%
% To reproduce the results you obtained in NPRTOOL:
%
% net = create_pr_net(trainingSetInput,trainingSetOutput);
% Create Network
numHiddenNeurons = 20; % Adjust as desired
net = newpr(inputs,targets,numHiddenNeurons);
net.divideParam.trainRatio = 75/100; % Adjust as desired
net.divideParam.valRatio = 15/100; % Adjust as desired
net.divideParam.testRatio = 10/100; % Adjust as desired
% Train and Apply Network
[net,tr] = train(net,inputs,targets);
outputs = sim(net,inputs);
% Plot
plotperf(tr)
plotconfusion(targets,outputs)
Before starting the training process, we need to remove all the preprocessing and postprocessing functions that MATLAB applies to inputs and outputs. The default input processing functions (such as removeconstantrows, which drops constant input rows) are what make size(net.IW{1}) smaller than expected, e.g. 20x199 instead of 20x200 in the question. This can be done by adding the following lines just before the % Train and Apply Network line:
net.inputs{1}.processFcns = {};
net.outputs{2}.processFcns = {};
After these changes to the create_pr_net() function, we can simply use it to create our final neural network:
net = create_pr_net(input, target);
where input and target are the values we exported through nprtool.
In this way, we are sure that the number of weights is equal to the length of the input array. Also, this process is useful to simplify the porting to other programming languages; a quick check is shown below.
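As a quick sanity check (using the 200-input, 20-neuron network from the question), the input weight matrix should now match the input length exactly:
>> size(net.IW{1})
ans =
20 200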
B - Implement and use the neural network just trained in other programming languages
With these changes, we can define a function like this:
function [ Results ] = classify( net, input )
y1 = tansig(net.IW{1} * input + net.b{1});
Results = tansig(net.LW{2} * y1 + net.b{2});
end
In this code, we use the IW and LW arrays mentioned above, as well as the biases b used in the network schema by nprtool. In this context we don't care about the role of the biases; we simply need to use them because nprtool does.
Now we can use the classify() function defined above, or equally the sim() function, obtaining the same results, as shown in the following example:
>> sim(net, input(:, 1))
ans =
0.9759
-0.1867
-0.1891
>> classify(net, input(:, 1))
ans =
0.9759
-0.1867
-0.1891
Obviously, the classify() function can be interpreted as pseudocode, and then implemented in every programming language in which it is possible to define the MATLAB tansig() function [2] and the basic operations between arrays; a sketch follows.
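For reference, tansig is mathematically equivalent to the hyperbolic tangent, so a port only needs its documented formula [2] (a minimal sketch):
function a = my_tansig(n)
% tansig(n) = 2/(1+exp(-2*n)) - 1, equivalent to tanh(n)
a = 2 ./ (1 + exp(-2*n)) - 1;
end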
References
[1] Howard Demuth, Mark Beale, Martin Hagan: Neural Network Toolbox 6 - User Guide, MATLAB
[2] Mathworks, tansig - Hyperbolic tangent sigmoid transfer function, MATLAB Documentation center
Additional notes
Take a look at robott's answer and Sangeun Chi's answer for more details.
Thanks to VitoShadow's and robott's answers, I was able to export MATLAB neural network values to other applications.
I really appreciate them, but I found some trivial errors in their code and want to correct them.
1) In VitoShadow's code,
Results = tansig(net.LW{2} * y1 + net.b{2});
-> Results = net.LW{2} * y1 + net.b{2};
2) In robott's preprocessing code, it would be easier to extract xmax and xmin from the net variable than to calculate them:
xmax = net.inputs{1}.processSettings{1}.xmax
xmin = net.inputs{1}.processSettings{1}.xmin
3) In robott's postprocessing code,
xmax = net.outputs{2}.processSettings{1}.xmax
xmin = net.outputs{2}.processSettings{1}.xmin
Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
-> Results = (Results-ymin)*(xmax-xmin)/(ymax-ymin) + xmin;
You can manually check and confirm the values as follows:
p2 = mapminmax('apply', input(:, 1), net.inputs{1}.processSettings{1})
-> preprocessed data
y1 = purelin( net.LW{2} * tansig(net.IW{1} * p2 + net.b{1}) + net.b{2})
-> Neural Network processed data
y2 = mapminmax( 'reverse' , y1, net.outputs{2}.processSettings{1})
-> postprocessed data
Reference:
http://www.mathworks.com/matlabcentral/answers/14517-processing-of-i-p-data
This is a small improvement to Vito Gentile's great answer.
If you want to use the preprocessing and postprocessing 'mapminmax' functions, you have to pay attention, because 'mapminmax' in MATLAB normalizes by ROW and not by column!
To keep the pre/post-processing coherent, this is what you need to add at the top of the classify function above:
[m n] = size(input);
ymax = 1;
ymin = -1;
for i=1:m
xmax = max(input(i,:));
xmin = min(input(i,:));
for j=1:n
input(i,j) = (ymax-ymin)*(input(i,j)-xmin)/(xmax-xmin) + ymin;
end
end
And this at the end of the function:
ymax = 1;
ymin = 0;
xmax = 1;
xmin = -1;
Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
This is MATLAB code, but it can easily be read as pseudocode.
I hope this is helpful!
I tried to implement a simple 2-layer NN in C++ using OpenCV and then exported the weights to Android, which worked quite well. I wrote a small script that generates a header file with the learned weights, and this is used in the following code snippet:
// Map Minimum and Maximum Input Processing Function
Mat mapminmax_apply(Mat x, Mat settings_gain, Mat settings_xoffset, double settings_ymin){
Mat y;
subtract(x, settings_xoffset, y);
multiply(y, settings_gain, y);
add(y, settings_ymin, y);
return y;
/* MATLAB CODE
y = x - settings_xoffset;
y = y .* settings_gain;
y = y + settings_ymin;
*/
}
// Sigmoid Symmetric Transfer Function
Mat transig_apply(Mat n){
Mat tempexp;
exp(-2*n, tempexp);
Mat transig_apply_result = 2 /(1 + tempexp) - 1;
return transig_apply_result;
}
// Map Minimum and Maximum Output Reverse-Processing Function
Mat mapminmax_reverse(Mat y, Mat settings_gain, Mat settings_xoffset, double settings_ymin){
Mat x;
subtract(y, settings_ymin, x);
divide(x, settings_gain, x);
add(x, settings_xoffset, x);
return x;
/* MATLAB CODE
function x = mapminmax_reverse(y,settings_gain,settings_xoffset,settings_ymin)
x = y - settings_ymin;
x = x ./ settings_gain;
x = x + settings_xoffset;
end
*/
}
Mat getNNParameter (Mat x1)
{
// convert double array to MAT
// input 1
Mat x1_step1_xoffsetM = Mat(1, 48, CV_64FC1, x1_step1_xoffset).t();
Mat x1_step1_gainM = Mat(1, 48, CV_64FC1, x1_step1_gain).t();
double x1_step1_ymin = -1;
// Layer 1
Mat b1M = Mat(1, 25, CV_64FC1, b1).t();
Mat IW1_1M = Mat(48, 25, CV_64FC1, IW1_1).t();
// Layer 2
Mat b2M = Mat(1, 48, CV_64FC1, b2).t();
Mat LW2_1M = Mat(25, 48, CV_64FC1, LW2_1).t();
// input 1
Mat y1_step1_gainM = Mat(1, 48, CV_64FC1, y1_step1_gain).t();
Mat y1_step1_xoffsetM = Mat(1, 48, CV_64FC1, y1_step1_xoffset).t();
double y1_step1_ymin = -1;
// ===== SIMULATION ========
// Input 1
Mat xp1 = mapminmax_apply(x1, x1_step1_gainM, x1_step1_xoffsetM, x1_step1_ymin);
Mat temp = b1M + IW1_1M*xp1;
// Layer 1
Mat a1M = transig_apply(temp);
// Layer 2
Mat a2M = b2M + LW2_1M*a1M;
// Output 1
Mat y1M = mapminmax_reverse(a2M, y1_step1_gainM, y1_step1_xoffsetM, y1_step1_ymin);
return y1M;
}
An example of a bias in the header could be this:
static double b2[1][48] = {
{-0.19879, 0.78254, -0.87674, -0.5827, -0.017464, 0.13143, -0.74361, 0.4645, 0.25262, 0.54249, -0.22292, -0.35605, -0.42747, 0.044744, -0.14827, -0.27354, 0.77793, -0.4511, 0.059346, 0.29589, -0.65137, -0.51788, 0.38366, -0.030243, -0.57632, 0.76785, -0.36374, 0.19446, 0.10383, -0.57989, -0.82931, 0.15301, -0.89212, -0.17296, -0.16356, 0.18946, -1.0032, 0.48846, -0.78148, 0.66608, 0.14946, 0.1972, -0.93501, 0.42523, -0.37773, -0.068266, -0.27003, 0.1196}};
Now that Google has published TensorFlow, this has become obsolete.
Hence the solution becomes (after correcting all parts):
Here I am giving a solution in MATLAB, but if you have a tanh() function, you can easily convert it to any programming language. It just shows the fields of the network object and the operations you need.
Assume you have a trained ANN (network object) that you want to export, and that its name is trained_ann.
Here is the script for exporting and testing. The testing script compares the original network result with the my_ann_evaluation() result:
% Export IT
exported_ann_structure = my_ann_exporter(trained_ann);
% Run and Compare
% Works only for single INPUT vector
% Please extend it to MATRIX version by yourself
input = [12 3 5 100];
res1 = trained_ann(input')';
res2 = my_ann_evaluation(exported_ann_structure, input')';
where you need the following two functions.
First, my_ann_exporter:
function [ my_ann_structure ] = my_ann_exporter(trained_netw)
% Just for extracting as Structure object
my_ann_structure.input_ymax = trained_netw.inputs{1}.processSettings{1}.ymax;
my_ann_structure.input_ymin = trained_netw.inputs{1}.processSettings{1}.ymin;
my_ann_structure.input_xmax = trained_netw.inputs{1}.processSettings{1}.xmax;
my_ann_structure.input_xmin = trained_netw.inputs{1}.processSettings{1}.xmin;
my_ann_structure.IW = trained_netw.IW{1};
my_ann_structure.b1 = trained_netw.b{1};
my_ann_structure.LW = trained_netw.LW{2};
my_ann_structure.b2 = trained_netw.b{2};
my_ann_structure.output_ymax = trained_netw.outputs{2}.processSettings{1}.ymax;
my_ann_structure.output_ymin = trained_netw.outputs{2}.processSettings{1}.ymin;
my_ann_structure.output_xmax = trained_netw.outputs{2}.processSettings{1}.xmax;
my_ann_structure.output_xmin = trained_netw.outputs{2}.processSettings{1}.xmin;
end
Second, my_ann_evaluation:
function [ res ] = my_ann_evaluation(my_ann_structure, input)
% Works with only single INPUT vector
% Matrix version can be implemented
ymax = my_ann_structure.input_ymax;
ymin = my_ann_structure.input_ymin;
xmax = my_ann_structure.input_xmax;
xmin = my_ann_structure.input_xmin;
input_preprocessed = (ymax-ymin) * (input-xmin) ./ (xmax-xmin) + ymin;
% Pass it through the ANN matrix multiplication
y1 = tanh(my_ann_structure.IW * input_preprocessed + my_ann_structure.b1);
y2 = my_ann_structure.LW * y1 + my_ann_structure.b2;
ymax = my_ann_structure.output_ymax;
ymin = my_ann_structure.output_ymin;
xmax = my_ann_structure.output_xmax;
xmin = my_ann_structure.output_xmin;
res = (y2-ymin) .* (xmax-xmin) /(ymax-ymin) + xmin;
end

Problems with real-valued input deep belief networks (of RBMs)

I am trying to recreate the results reported in "Reducing the Dimensionality of Data with Neural Networks" for autoencoding the Olivetti face dataset, using an adapted version of the MNIST digits MATLAB code, but I am having some difficulty. It seems that no matter how much tweaking I do of the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error, and consequently fail to improve much during fine-tuning. I am also experiencing a similar problem on another real-valued dataset.
For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with
negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);
I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors.
Is there something I am missing? See below for the code I'm using for real-valued visible-unit RBMs, and for the whole deep training. The rest of the code can be found here.
rbmvislinear.m:
epsilonw = 0.001; % Learning rate for weights
epsilonvb = 0.001; % Learning rate for biases of visible units
epsilonhb = 0.001; % Learning rate for biases of hidden units
weightcost = 0.0002;
initialmomentum = 0.5;
finalmomentum = 0.9;
[numcases numdims numbatches]=size(batchdata);
if restart ==1,
restart=0;
epoch=1;
% Initializing symmetric weights and biases.
vishid = 0.1*randn(numdims, numhid);
hidbiases = zeros(1,numhid);
visbiases = zeros(1,numdims);
poshidprobs = zeros(numcases,numhid);
neghidprobs = zeros(numcases,numhid);
posprods = zeros(numdims,numhid);
negprods = zeros(numdims,numhid);
vishidinc = zeros(numdims,numhid);
hidbiasinc = zeros(1,numhid);
visbiasinc = zeros(1,numdims);
sigmainc = zeros(1,numhid);
batchposhidprobs=zeros(numcases,numhid,numbatches);
end
for epoch = epoch:maxepoch,
fprintf(1,'epoch %d\r',epoch);
errsum=0;
for batch = 1:numbatches,
if (mod(batch,100)==0)
fprintf(1,' %d ',batch);
end
%%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
data = batchdata(:,:,batch);
poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
batchposhidprobs(:,:,batch)=poshidprobs;
posprods = data' * poshidprobs;
poshidact = sum(poshidprobs);
posvisact = sum(data);
%%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
poshidstates = poshidprobs > rand(numcases,numhid);
%%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);% + randn(numcases,numdims) if not using mean
neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
negprods = negdata'*neghidprobs;
neghidact = sum(neghidprobs);
negvisact = sum(negdata);
%%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
err= sum(sum( (data-negdata).^2 ));
errsum = err + errsum;
if epoch>5,
momentum=finalmomentum;
else
momentum=initialmomentum;
end;
%%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
vishidinc = momentum*vishidinc + ...
epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
vishid = vishid + vishidinc;
visbiases = visbiases + visbiasinc;
hidbiases = hidbiases + hidbiasinc;
%%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
end
dofacedeepauto.m:
clear all
close all
maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine.
numhid=2000; numpen=1000; numpen2=500; numopen=30;
fprintf(1,'Pretraining a deep autoencoder. \n');
fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);
load fdata
%makeFaceData;
[numcases numdims numbatches]=size(batchdata);
fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
restart=1;
rbmvislinear;
hidrecbiases=hidbiases;
save mnistvh vishid hidrecbiases visbiases;
maxepoch=50;
fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
batchdata=batchposhidprobs;
numhid=numpen;
restart=1;
rbm;
hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;
save mnisthp hidpen penrecbiases hidgenbiases;
fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
batchdata=batchposhidprobs;
numhid=numpen2;
restart=1;
rbm;
hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;
fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
batchdata=batchposhidprobs;
numhid=numopen;
restart=1;
rbmhidlinear;
hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases;
save mnistpo hidtop toprecbiases topgenbiases;
backpropface;
Thanks for your time.
Silly me, I had forgotten to change the back-propagation fine-tuning script (backprop.m). One has to change the output layer (where the faces get reconstructed) to use real-valued units, i.e.
dataout = w7probs*w8;
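For context, in the original MNIST scripts the reconstruction layer is logistic, and the fix is simply to drop the sigmoid so that the output units are linear (a sketch; w7probs and w8 are the variable names used in those scripts):
% dataout = 1./(1 + exp(-w7probs*w8));  % original: logistic output units for binary data
dataout = w7probs*w8;                   % real-valued data: linear output units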