I'm trying to write a neural network for binary classification in PyTorch, and I'm confused about the loss function.
I see that BCELoss is a common function specifically geared for binary classification. I also see that an output layer of N outputs for N possible classes is standard for general classification. However, for binary classification it seems like it could be either 1 or 2 outputs.
So, should I have 2 outputs (1 for each label) and then convert my 0/1 training labels into [1,0] and [0,1] arrays, or use something like a sigmoid for a single-variable output?
Here are the relevant snippets of code so you can see:
self.outputs = nn.Linear(NETWORK_WIDTH, 2) # 1 or 2 dimensions?
def forward(self, x):
    # other layers omitted
    x = self.outputs(x)
    return F.log_softmax(x)  # <<< softmax over multiple vars, sigmoid over one, or other?
criterion = nn.BCELoss() # <<< Is this the right function?
net_out = net(data)
loss = criterion(net_out, target) # <<< Should target be an integer label or 1-hot vector?
Thanks in advance.
For binary outputs you can use 1 output unit, so then:
self.outputs = nn.Linear(NETWORK_WIDTH, 1)
Then you use sigmoid activation to map the values of your output unit to a range between 0 and 1 (of course you need to arrange your training data this way too):
def forward(self, x):
    # other layers omitted
    x = self.outputs(x)
    return torch.sigmoid(x)
Finally you can use the torch.nn.BCELoss:
criterion = nn.BCELoss()
net_out = net(data)
loss = criterion(net_out, target)
This should work fine for you.
You can also use torch.nn.BCEWithLogitsLoss; this loss function already includes the sigmoid, so you can leave it out of your forward.
If you want to use 2 output units, that is also possible. But then you need to use torch.nn.CrossEntropyLoss instead of BCELoss; the softmax activation is already included in that loss function.
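To make the two options concrete, here is a minimal sketch (the tensor names and batch size are just illustrative): with a single output unit the targets are floats of the same shape as the output, while with two output units the targets are integer class labels.
import torch
import torch.nn as nn

# Option A: 1 output unit + BCEWithLogitsLoss (sigmoid is applied inside the loss)
logits_a = torch.randn(8, 1)                       # raw outputs for a batch of 8
targets_a = torch.randint(0, 2, (8, 1)).float()    # 0./1. targets, same shape as the output
loss_a = nn.BCEWithLogitsLoss()(logits_a, targets_a)

# Option B: 2 output units + CrossEntropyLoss (softmax is applied inside the loss)
logits_b = torch.randn(8, 2)                       # one score per class
targets_b = torch.randint(0, 2, (8,))              # integer class labels, not one-hot
loss_b = nn.CrossEntropyLoss()(logits_b, targets_b)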
Edit: I just want to emphasize that there is a real difference between the two options: using 2 output units gives you twice as many output-layer weights compared to using 1 output unit, so the two alternatives are not equivalent.
Some theory to add:
For binary classification (say class 0 and class 1), the network should have only 1 output unit. Its output will be 1 (for class 1 present, or class 0 absent) or 0 (for class 1 absent, or class 0 present).
For the loss calculation, you first pass the output through a sigmoid and then through binary cross-entropy (BCE). The sigmoid maps the network output to a probability between 0 and 1, and minimizing BCE then corresponds to maximizing the likelihood of the desired output.
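To see the connection, you can write the BCE formula out by hand and check it against nn.BCELoss (a small sketch; the numbers are made up):
import torch
import torch.nn as nn

p = torch.tensor([0.9, 0.2, 0.7])   # sigmoid outputs (predicted probabilities)
y = torch.tensor([1.0, 0.0, 1.0])   # true labels

# BCE(p, y) = -[y*log(p) + (1 - y)*log(1 - p)], averaged over the batch
manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
builtin = nn.BCELoss()(p, y)
print(manual.item(), builtin.item())  # both values should match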
I implemented an MLP with the backpropagation algorithm, and it works fine for a single entry; for example, if the input is 1 and 1, the answers on the last layer will be 1 and 0.
Let's suppose that instead of having only one entry (like 1,1) I have four entries (1,1; 1,0; 0,0; 0,1), each with a different expected answer.
I need to train this MLP so that it answers correctly for all entries.
I can't find a way to do this. Suppose I have 1000 epochs; would I need to train each entry for 250 epochs, or train one epoch with one entry and the next epoch with another?
How can I properly train an MLP to answer correctly for all entries?
At least for a Python implementation, you can simply use multidimensional training data:
# training a neural network to behave like an XOR gate
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoidPrime(s):
    # derivative of the sigmoid, given s = sigmoid(x)
    return s * (1.0 - s)

X = np.array([[1,0],[0,1],[1,1],[0,0]])  # entries
y = np.array([[1],[1],[0],[0]])          # expected answers

INPUTS = X.shape[1]
HIDDEN = 12
OUTPUTS = y.shape[1]

w1 = np.random.randn(INPUTS, HIDDEN) * np.sqrt(2 / INPUTS)
w2 = np.random.randn(HIDDEN, OUTPUTS) * np.sqrt(2 / HIDDEN)

ALPHA = 0.5
EPOCHS = 1000

for e in range(EPOCHS):
    # forward pass: all four entries at once
    z1 = sigmoid(X.dot(w1))
    o = sigmoid(z1.dot(w2))

    # backward pass
    o_error = o - y
    o_delta = o_error * sigmoidPrime(o)
    z1_error = o_delta.dot(w2.T)           # propagate the error back before updating w2
    z1_delta = z1_error * sigmoidPrime(z1)

    # weight updates
    w2 -= z1.T.dot(o_delta) * ALPHA
    w1 -= X.T.dot(z1_delta) * ALPHA

print(np.mean(np.abs(o_error)))  # prints the final loss of the NN
Such an approach might not work with some neural network libraries, but that shouldn't matter, because most libraries handle this kind of batching themselves.
The reason this works is that in the dot product between the input and the hidden-layer weights, each training entry (each row of X) is multiplied by the whole weight matrix individually, so the result is a matrix containing the hidden-layer activations for every sample.
This process continues throughout the entire network, so you are essentially running multiple instances of the same neural network in parallel.
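You can verify this with a quick check: forwarding the whole input matrix at once gives the same hidden activations as forwarding each row separately (a sketch using the same shapes as above; the weights here are random just for the comparison):
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[1,0],[0,1],[1,1],[0,0]])
w1 = np.random.randn(2, 12)

batched = sigmoid(X.dot(w1))                              # all four samples at once
per_sample = np.vstack([sigmoid(x.dot(w1)) for x in X])   # one sample at a time
print(np.allclose(batched, per_sample))                   # True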
The number of training entries doesn't have to be four; it can be arbitrarily large, as long as each row of X matches the size of the input layer, each row of y matches the size of the output layer, X and y have the same number of rows, and you have enough RAM.
Also, nothing about the network architecture fundamentally changes compared to using single entries; only the data fed into it changes, so you most likely don't have to scrap the code you've written, just make a few small changes.
My dataset contains labels 0 and 1, with 100 examples in total, each with feature dimension 39. There are 50 examples belonging to class 1 and the remaining 50 belonging to class 0. The graphical output shows only one output node instead of two. There should be two output nodes since there are two categories. I am flabbergasted as to why this is happening. The following is the code; I shall be grateful for your help.
hiddenlayersize = 5;
net = patternnet(hiddenlayersize);
net = init(net);
net.performFcn = 'crossentropy';
[net] = train(net,x,t);
out = sim(net,x);
Also, out is not binary. How do I get the predicted labels in binary form as well?
The classification outputs the results in the form of probabilities - your results are fine.
The default threshold for converting probabilities into the two classes (say 0 and 1) is 0.5.
You can fine-tune the threshold by moving it up or down and further analysing the outcomes (false positives, false negatives, precision-recall curves, etc.) depending on what the objective is.
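In case a concrete example helps, the thresholding step looks like this in Python/numpy terms (the numbers are made up; the equivalent one-liner in MATLAB is out >= 0.5):
import numpy as np

probs = np.array([0.12, 0.55, 0.49, 0.91])   # network outputs (probabilities)
threshold = 0.5
labels = (probs >= threshold).astype(int)    # -> [0, 1, 0, 1]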
Hope this helps.
I have a question, please, concerning cross-validation; as I understand it, cross-validation is used to find the best parameters.
But I do not understand the role of the function crossvalind ("Generate cross-validation indices"): it just takes a data set without a model, as in this example:
load fisheriris
[g gn] = grp2idx(species);
[trainIdx testIdx] = crossvalind('HoldOut', species, 1/3);
The crossvalind() function splits your data into two groups: the training set and the cross-validation set.
By your example:
[trainIdx testIdx] = crossvalind('HoldOut', size(species,1), 1/3); means: split the data in species, with 2/3 going to the training set and 1/3 to the cross-validation set.
Supposing that your data is like:
species=[datarow1;datarow2;datarow3;datarow4;datarow5;datarow6] then
trainIdx would be like [1;1;0;1;1;0] and testIdx would be like [0;0;1;0;0;1], meaning that from the 6 total elements in our set the crossvalind function assigned 4 to the training set and 2 to the cross-validation set. Of course this is a random assignment, meaning that the zero and one indices will vary every time you call the function, but the proportion between them will be fixed, and trainIdx + testIdx will always be ones(size(species,1),1).
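If it helps to see the mechanism outside MATLAB, here is a rough numpy analogue of the 'HoldOut' split (purely illustrative, not the actual crossvalind implementation):
import numpy as np

n = 6                        # number of samples, e.g. size(species,1)
p = 1/3                      # fraction held out for the cross-validation set

test_idx = np.zeros(n, dtype=bool)
test_idx[np.random.choice(n, size=round(n * p), replace=False)] = True
train_idx = ~test_idx        # complementary masks; together they cover every sample exactly once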
crossvalind('LeaveMOut',size(species,1),2) would be exactly the same as crossvalind('HoldOut', size(species,1), 1/3) in this particular case. In the 'HoldOut' format you provide a parameter P which takes values from 0 to 1 (like 1/3 in the example above), while with the 'LeaveMOut' option you provide an integer M, like 2 samples out of the 6 total, or 2000 samples out of the 10000 total samples in your dataset.
In the case of 'Resubstitution': crossvalind('Resubstitution', size(species,1), [1/3,2/3]) would again be the same, but here you also have the option of, say, [1/3,3/4], meaning that some samples can be in both the training and cross-validation sets, or even [1,1], which means that all the samples are used in both sets (trainIdx=testIdx=[1;1;1;1;1;1] in the example above).
I strongly suggest typing help crossvalind and taking a look at the help file, which is always a lot more detailed and helpful than I could ever be.
I've got a problem with implementing multilayered perceptron with Matlab Neural Networks Toolkit.
I am trying to implement a neural network which will recognize a single character stored as a binary image (size 40x50).
The image is transformed into a binary vector. The output is encoded in 6 bits. I use the simple newff function in this way (with 30 perceptrons in the hidden layer):
net = newff(P, [30, 6], {'tansig' 'tansig'}, 'traingd', 'learngdm', 'mse');
Then I train my network with a dozen characters in 3 different fonts, with the following training parameters:
net.trainParam.epochs=1000000;
net.trainParam.goal = 0.00001;
net.trainParam.lr = 0.01;
After training, the net recognized all characters from the training sets correctly, but it cannot recognize more than two characters from other fonts.
How could I improve that simple network?
You can try adding random elastic distortions to your training set (in order to expand it and make it generalize better).
You can see the details in this nice article from Microsoft Research:
http://research.microsoft.com/pubs/68920/icdar03.pdf
You have a very large number of input variables (2,000, if I understand your description). My first suggestion is to reduce this number if possible. Some possible techniques include subsampling the input variables or calculating informative features (such as row and column totals, which would reduce the input vector to 90 = 40 + 50).
Also, your output is coded as 6 bits, which provides 64 possible combined values, so I assume that you are using these to represent the 26 letters? If so, then you may fare better with another output representation. Consider that various letters which look nothing alike will, for instance, share the value of 1 on bit 1, complicating the mapping from inputs to outputs. An output representation with 1 bit for each class would simplify things, as sketched below.
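A rough numpy sketch of both ideas (the 40x50 shape and variable names are just illustrative; the same is easy to do in MATLAB):
import numpy as np

img = (np.random.rand(40, 50) > 0.5).astype(int)   # a 40x50 binary character image
row_sums = img.sum(axis=1)                         # 40 values
col_sums = img.sum(axis=0)                         # 50 values
features = np.concatenate([row_sums, col_sums])    # 90 features instead of 2000 pixels

label = 0                                          # e.g. the letter 'A'
target = np.zeros(26, dtype=int)                   # one output unit per class
target[label] = 1                                  # 1 only at the true letter's position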
You could use patternnet instead of newff; it creates a network more suitable for pattern recognition. As the target use a 26-element vector with a 1 in the right letter's position (0 elsewhere). The output of the recognition will be a vector of 26 real values between 0 and 1, with the recognized letter having the highest value.
Make sure to use data from all fonts for the training.
Give all the data sets as input; train will automatically divide them into training, validation and test sets according to the specified percentages:
net.divideParam.trainRatio = .70;
net.divideParam.valRatio = .15;
net.divideParam.testRatio = .15;
(choose your own percentages).
Then test using only the test set; you can find its indices in tr.testInd:
[net, tr] = train(net,inputs,targets);
tr.testInd
I'm working on creating a 2-layer neural network with back-propagation. The NN is supposed to get its data from a 20001x17 array that holds the following information in each row:
- The first 16 cells hold integers ranging from 0 to 15 which act as variables to help us determine which one of the 26 letters of the alphabet we mean to express. For example, a series of 16 values like the following is meant to represent the letter A: [2 8 4 5 2 7 5 3 1 6 0 8 2 7 2 7].
- The 17th cell holds a number ranging from 1 to 26 representing the letter of the alphabet we want: 1 stands for A, 2 stands for B, etc.
The output layer of the NN consists of 26 outputs. Every time the NN is fed an input like the one described above, it's supposed to output a 1x26 vector containing zeros in all but the one cell that corresponds to the letter that the input values were meant to represent. For example, the output [1 0 0 ... 0] would be the letter A, whereas [0 0 0 ... 1] would be the letter Z.
Some things that are important before I present the code: I need to use the traingdm function, and the number of hidden-layer neurons is fixed (for now) at 21.
To implement the above concept I wrote the following MATLAB code:
%%%%%%%%
%Start of code%
%%%%%%%%
%
%Initialize the input and target vectors
%
p = zeros(16,20001);
t = zeros(26,20001);
%
%Fill the input and training vectors from the dataset provided
%
for i=2:20001
    for k=1:16
        p(k,i-1) = data(i,k);
    end
    t(data(i,17),i-1) = 1;
end
net = newff(minmax(p),[21 26],{'logsig' 'logsig'},'traingdm');
y1 = sim(net,p);
net.trainParam.epochs = 200;
net.trainParam.show = 1;
net.trainParam.goal = 0.1;
net.trainParam.lr = 0.8;
net.trainParam.mc = 0.2;
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.7;
net.divideParam.testRatio = 0.2;
net.divideParam.valRatio = 0.1;
%[pn,ps] = mapminmax(p);
%[tn,ts] = mapminmax(t);
net = init(net);
[net,tr] = train(net,p,t);
y2 = sim(net,p); % use p here; pn is only defined if the mapminmax lines above are uncommented
%%%%%%%%
%End of code%
%%%%%%%%
Now to my problem: I want my outputs to be as described; namely, each column of the y2 vector should be a representation of a letter. My code doesn't do that, though. Instead it produces results that vary greatly between 0 and 1, with values from 0.1 to 0.9.
My question is: is there some conversion I need to be doing that I am not? Meaning, do I have to convert my input and/or output data to a form by which I can actually see if my NN is learning correctly?
Any input would be appreciated.
This is normal. Your output layer is using a log-sigmoid transfer function, and that will always give you some intermediate output between 0 and 1.
What you would usually do would be to look for the output with the largest value -- in other words, the most likely character.
This would mean that, for every column in y2, you're looking for the index of the row that contains the largest value in that column. You can compute this as follows:
[dummy, I]=max(y2);
I is then a vector containing the index of the largest value in each column.
You can think of each column of y2 as a probability distribution over the 26 alphabet characters for the corresponding input; for example, if one column of y2 says:
.2
.5
.15
.15
then there is a 50% probability that this character is B (if we assume only 4 possible outputs).
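The same max-then-decode step, written out in numpy just to make the idea explicit (the scores below are made up; MATLAB's max shown above does the equivalent):
import numpy as np

y2 = np.array([[0.20, 0.10],
               [0.50, 0.70],
               [0.15, 0.10],
               [0.15, 0.10]])                      # one column of scores per input sample

winners = np.argmax(y2, axis=0)                    # row index of the largest value in each column
letters = [chr(ord('A') + i) for i in winners]     # 0 -> 'A', 1 -> 'B', ...
print(letters)                                     # ['B', 'B'] for this toy example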
==REMARK==
The output layer of the NN consists of 26 outputs. Every time the NN is fed an input like the one described above, it's supposed to output a 1x26 vector containing zeros in all but the one cell that corresponds to the letter that the input values were meant to represent. For example, the output [1 0 0 ... 0] would be the letter A, whereas [0 0 0 ... 1] would be the letter Z.
It is preferable to avoid using target values of 0,1 to encode the output of the network.
The reason for avoiding target values of 0 and 1 is that the 'logsig' sigmoid transfer function cannot produce these output values given finite weights. If you attempt to train the network to fit target values of exactly 0 and 1, gradient descent will force the weights to grow without bound.
So instead of 0 and 1 values, try using values of 0.04 and 0.9 for example, so that [0.9,0.04,...,0.04] is the target output vector for the letter A.
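A small sketch of building such softened target vectors from integer labels (the 0.04/0.9 values follow the suggestion above; names are illustrative):
import numpy as np

labels = np.array([0, 2, 25])                  # integer class labels (A=0, C=2, Z=25)
targets = np.full((labels.size, 26), 0.04)     # low value everywhere instead of 0
targets[np.arange(labels.size), labels] = 0.9  # high value at the true class instead of 1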
Reference:
Thomas M. Mitchell, Machine Learning, McGraw-Hill Higher Education, 1997, p114-115
Use the hardlim transfer function in the output layer.
Use trainlm or trainrp for training the network.
To train your network, use a for loop with a condition that compares the output and the target; when the result is good enough, break out of the training loop.
Use another method instead of mapminmax for pre-processing the data set.
I don't know if this constitutes an actual answer or not: but here are some remarks.
I don't understand your coding scheme. How is an 'A' represented by that set of numbers? It looks like you're falling into a fairly common trap of using arbitrary numbers to code categorical values. Don't do this: for example, if 'a' is 1, 'b' is 2 and 'c' is 3, then your coding has implicitly stated that 'a' is more like 'b' than 'c' (because the network has real-valued inputs, the ordinal properties matter). The way to do this properly is to have each letter represented as 26 binary-valued inputs, where only one is ever active, representing the letter.
Your outputs are correct: the activation at the output layer will never be exactly 0 or 1, but real numbers in between. You could take the max as your activity function, but this is problematic because it's not differentiable, so you can't use back-prop. What you should do is couple the outputs with the softmax function, so that their sum is one. You can then treat the outputs as conditional probabilities given the inputs, if you so desire. While the network is not explicitly probabilistic, with the correct activity and activation functions it will be identical in structure to a log-linear model (possibly with latent variables corresponding to the hidden layer), and people do this all the time.
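For reference, a minimal softmax in numpy (not tied to any particular toolbox):
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, -0.5])   # raw output-layer activations
probs = softmax(scores)               # non-negative and sums to 1
print(probs, probs.sum())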
See David MacKay's textbook for a nice intro to neural nets, which will make the probabilistic connection clear. Take a look at this paper from Geoff Hinton's group, which describes the task of predicting the next character given the context, for details on the correct representation and activation/activity functions (although beware: their method is non-trivial and uses a recurrent net with a different training method).