Neural Network can't learn XOR

I've created a neural network with the following structure:
Input1 - Input2 - Input layer.
N0 - N1 - Hidden layer. 3 Weights per node (one for bias).
N2 - Output layer. 3 Weights (one for bias).
I am trying to train it on the XOR function with the following test data:
0 1 - desired result: 1
1 0 - desired result: 1
0 0 - desired result: 0
1 1 - desired result: 0
After training, the mean squared error for a test case with desired output 1, e.g. {0, 1}, is 0, which is good I presume. However, the mean squared error for a test case with desired output 0, e.g. {1, 1}, is 0.5, which surely needs to be zero. During the learning stage I notice the MSE of the true (1) cases drops to zero within the first few epochs, whereas the MSE of the false (0) cases lingers around 0.5.
I'm using backpropagation to train the network, with a sigmoid function. The issue is that when I test any input combination after training, I always get an output of 1.0. The network also seems to learn very fast, even with an extremely small learning rate.
If it helps, here are the weights that are produced:
N0-W0 = 0.5, N0-W1 = -0.999, N0-W2 = 0.304 (bias) - Hidden Layer
N1-W0 = 0.674, N1-W1 = -0.893, N1-W2 = 0.516 (bias) - Hidden Layer
N2-W0 = -0.243, N2-W1 = 0.955, N2-W2 = 0.369 (bias) - Output node
Thanks.

Here are some steps that can help solve your problem (a minimal training sketch follows these steps):
Change your activation function. Here is a similar question which I answered using ReLU as the activation function: Neural network XOR gate not learning
Increase the number of epochs.
Change your learning rate to a larger, suitable value, so that you can reach convergence faster. You can find more info here:
How to determine the learning rate and the variance in a gradient descent algorithm?
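To make the suggested steps concrete, here is a minimal, self-contained numpy sketch (not the asker's implementation) of a small 2-4-1 network learning XOR with a tanh hidden layer and a sigmoid output. The hidden width, learning rate, epoch count, and random seed are illustrative assumptions rather than recommended values.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]], dtype=float)
y = np.array([[1], [1], [0], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)   # hidden layer (4 tanh units)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # output layer (sigmoid)
lr = 0.5                                                     # illustrative learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for a mean-squared-error style loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))   # should end close to [[1], [1], [0], [0]]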


Problem understanding Loss function behavior using Flux.jl in Julia

So, first of all, I am new to Neural Networks (NN).
As part of my PhD, I am trying to solve some problem through NN.
For this, I have created a program that creates some data set made of
a collection of input vectors (each with 63 elements) and its corresponding
output vectors (each with 6 elements).
So, my program looks like this:
Nₜᵣ = 25; # number of inputs in the data set
xtrain, ytrain = dataset_generator(Nₜᵣ); # generates In/Out vectors: xtrain/ytrain
datatrain = zip(xtrain,ytrain); # assemble my data
Now, both xtrain and ytrain are of type Array{Array{Float64,1},1}, meaning that
if (say) Nₜᵣ = 2, they look like:
julia> xtrain #same for ytrain
2-element Array{Array{Float64,1},1}:
[1.0, -0.062, -0.015, -1.0, 0.076, 0.19, -0.74, 0.057, 0.275, ....]
[0.39, -1.0, 0.12, -0.048, 0.476, 0.05, -0.086, 0.85, 0.292, ....]
The first 3 elements of each vector are normalized to unity (they represent x, y, z coordinates), and the following 60 numbers are also normalized to unity and correspond to some measurable attributes.
The program continues like:
layer1 = Dense(length(xtrain[1]),46,tanh); # setting 6 layers
layer2 = Dense(46,36,tanh) ;
layer3 = Dense(36,26,tanh) ;
layer4 = Dense(26,16,tanh) ;
layer5 = Dense(16,6,tanh) ;
layer6 = Dense(6,length(ytrain[1])) ;
m = Chain(layer1,layer2,layer3,layer4,layer5,layer6); # composing the layers
squaredCost(ym,y) = (1/2)*norm(y - ym).^2;
loss(x,y) = squaredCost(m(x),y); # define loss function
ps = Flux.params(m); # initializing mod.param.
opt = ADAM(0.01, (0.9, 0.8)); #
and finally:
trainmode!(m,true)
itermax = 700; # set max number of iterations
losses = [];
for iter in 1:itermax
    Flux.train!(loss,ps,datatrain,opt);
    push!(losses, sum(loss.(xtrain,ytrain)));
end
It runs perfectly; however, it comes to my attention that as I train my model with an increasing data set (Nₜᵣ = 10, 15, 25, etc.), the loss function seems to increase, as shown in the referenced loss plot (curves y1: Nₜᵣ=10, y2: Nₜᵣ=15, y3: Nₜᵣ=25).
So, my main question:
Why is this happening? I cannot see an explanation for this behavior. Is this somehow expected?
Remarks: Note that
All elements from the training data set (input and output) are normalized to [-1,1].
I have not tried changing the activation functions
I have not tried changing the optimization method
Considerations: I need a training data set of nearly 10000 input vectors, so I am expecting an even worse scenario...
Some personal thoughts:
Am I arranging my training dataset correctly? Say, if every single data vector is made of 63 numbers, is it correct to group them in an array and then pile them into an Array{Array{Float64,1},1}? I have no experience using NNs and Flux. How could I build a data set of 10000 I/O vectors differently? Can this be the issue? (I am very inclined to this.)
Can this behavior be related to the chosen act. functions? (I am not inclined to this)
Can this behavior be related to the opt. algorithm? (I am not inclined to this)
Am I training my model wrong? Is the iteration loop really performing iterations, or are they epochs? I am struggling to put the concepts of "epochs" and "iterations" into practice (and to differentiate them).
loss(x,y) = squaredCost(m(x),y); # define loss function
Your losses aren't normalized, so adding more data can only increase this cost function. However, the cost per data point doesn't seem to be increasing. To get rid of this effect, you might want to use a normalized cost function, for example the mean squared cost.
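To illustrate that point outside Flux, here is a small, hedged numpy toy (independent of the poster's code, with made-up dimensions and noise level): with a fixed per-sample error, a summed squared cost grows with the number of samples, while the mean squared cost stays roughly constant.
import numpy as np

rng = np.random.default_rng(0)

for n_samples in (10, 15, 25):
    y_true = rng.normal(size=(n_samples, 6))               # 6 outputs per sample, as in the question
    y_pred = y_true + 0.1 * rng.normal(size=y_true.shape)  # predictions with a fixed error level
    summed = 0.5 * np.sum((y_true - y_pred) ** 2)          # analogous to summing squaredCost over the data
    mean = np.mean((y_true - y_pred) ** 2)                 # normalized: mean squared error
    print(n_samples, round(summed, 4), round(mean, 4))     # summed grows with n_samples, mean does not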

Impact of using relu for gradient descent

What impact does it have that the ReLU activation function is not differentiable everywhere?
How to implement the ReLU function in Numpy implements ReLU as the maximum of (0, matrix/vector elements).
Does this mean that for gradient descent we do not take the derivative of the ReLU function?
Update :
From Neural network backpropagation with RELU
this text aids in understanding:
The ReLU function is defined as: For x > 0 the output is x, i.e. f(x)
= max(0,x)
So for the derivative f '(x) it's actually:
if x < 0, output is 0. if x > 0, output is 1.
The derivative f '(0) is not defined. So it's usually set to 0 or you
modify the activation function to be f(x) = max(e,x) for a small e.
Generally: A ReLU is a unit that uses the rectifier activation
function. That means it works exactly like any other hidden layer but
except tanh(x), sigmoid(x) or whatever activation you use, you'll
instead use f(x) = max(0,x).
If you have written code for a working multilayer network with sigmoid
activation it's literally 1 line of change. Nothing about forward- or
back-propagation changes algorithmically. If you haven't got the
simpler model working yet, go back and start with that first.
Otherwise your question isn't really about ReLUs but about
implementing a NN as a whole.
But this still leaves some confusion: the gradient of the neural network cost function typically involves the derivative of the activation function, so for ReLU how does this impact the cost function?
The standard answer is that the input to ReLU is rarely exactly zero, see here for example, so it doesn't make any significant difference.
Specifically, for ReLU to get a zero input, the dot product of one entire row of the input to a layer with one entire column of the layer's weight matrix would have to be exactly zero. Even if you have an all-zero input sample, there should still be a bias term in the last position, so I don't really see this ever happening.
However, if you want to test for yourself, try implementing the derivative at zero as 0, 0.5, and 1 and see if anything changes.
The PyTorch docs give a simple neural-network-with-numpy example with one hidden layer and ReLU activation. I have reproduced it below with a fixed random seed and three options for setting the behavior of the ReLU gradient at 0. I have also added a bias term.
import numpy as np

np.random.seed(1)
N, D_in, H, D_out = 4, 2, 30, 1
# Create random input and output data
x = np.random.randn(N, D_in)
x = np.c_[x, np.ones(x.shape[0])]  # append a bias column of ones
y = np.random.randn(N, D_out)
# Randomly initialize weights
w1 = np.random.randn(D_in+1, H)
w2 = np.random.randn(H, D_out)
learning_rate = 0.002
loss_col = []
for t in range(200):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0) # using ReLU as the activation function
    y_pred = h_relu.dot(w2)
    # Compute and print loss
    loss = np.square(y_pred - y).sum() # loss function
    loss_col.append(loss)
    print(t, loss, y_pred)
    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y) # the last layer's error
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T) # the hidden layer's error
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0 # gradient at zero treated as 1
    # grad_h[h <= 0] = 0 # gradient at zero treated as 0
    # grad_h[h < 0] = 0; grad_h[h == 0] = 0.5 # gradient at zero treated as 0.5
    grad_w1 = x.T.dot(grad_h)
    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

Training bias weight backpropagation

I'm trying to write an XOR solution using neural networks and the sigmoid activation function. (With True=0.9 and False=0.1)
I'm at the backpropagation part now.
The formula I was given for computing weight adjustments is:
delta_weight(l,i,j) = gamma*output(l,i)*error_signal(l,j)
i.e., the weight adjustment for the link between layer 1 (hidden), node 2, and layer 2 (output), node 0 is:
delta_weight(1,2,0)
I chose gamma=0.5
Since bias weights are associated with a single node I guessed the weight adjustment formula was:
delta_weight(l,i) = gamma*output(l,i)
My program is not working, clearly my guess was incorrect. Could someone help me along?
Thanks a bunch!
EDIT: CODE
def applyInputs(self, inps):
    for i in range(len(self.layers)-1):
        for n, node in enumerate(self.layers[i+1].nodes):
            ans = 0
            for m, mode in enumerate(self.layers[i].nodes):
                ans += self.links[stringify(i,m,i+1,n)].weight * mode.output
            if node.bias == True:
                ans += self.links[stringify(-1,-1,i+1,n)].weight
            node.set_output(response(ans))
    return self.layers[len(self.layers)-1].nodes[0].output

def computeErrorSignals(self, out): # 'out' is the output of the entire network (only 1 output node)
    # output node error signal
    output_node = self.layers[len(self.layers)-1].nodes[0]
    fin_err = (out - output_node.output)*output_node.output*(1-output_node.output)
    output_node.set_error(fin_err)
    # hidden node error signals
    for j in range(len(self.layers[1].nodes)):
        hid_node = self.layers[1].nodes[j]
        err = (hid_node.output)*(1-hid_node.output)*self.layers[2].nodes[0].error_signal*self.links[stringify(1,j,2,0)].weight
        hid_node.set_error(err)

def computeWeightAdjustments(self):
    for i in range(len(self.layers)-1):
        for n, node in enumerate(self.layers[i+1].nodes):
            for m, mode in enumerate(self.layers[i].nodes):
                self.links[stringify(i,m,i+1,n)].weight += ((0.5)*self.layers[i+1].nodes[n].error_signal*self.layers[i].nodes[m].output)
            if node.bias == True:
                self.links[stringify(-1,-1,i+1,n)].weight += ((0.5)*self.layers[i].nodes[m].output)

Inconsistent/Different Test Performance/Error After Training Neural Network in Matlab

Keeping all parameters constant, I get different Mean Average Percentage Errors on my test data on retraining the neural network. Why is this so? Aren't all components of the neural network training process deterministic? Sometimes, I see a difference of up to 1% on successive trainings.
The training code is below
netFeb = newfit(trainX', trainY', networkConfigFeb);
netFeb.performFcn = 'mae';
netFeb = trainlm(netFeb, trainX', trainY');
% Index for the testing Data
startingInd = find(trainInd == 0, 1, 'first');
endingInd = startingInd + daysInMonth('Feb') - 1 ;
% Testing Data
testX = X(startingInd:endingInd,:);
testY = dailyPeakLoad(startingInd:endingInd,:);
actualLoadFeb = testY;
% Calculate the Forecast Load and the Mean Absolute Percentage Error
forecastLoadFeb = sim(netFeb, testX')';
errFeb = testY - forecastLoadFeb;
errpct = abs(errFeb)./testY*100;
MAPEFeb = mean(errpct(~isinf(errpct)));
As A. Donda hinted, since neural networks initialize their weights randomly, they will generate different networks after training, and thus give you different performance. While the training process is deterministic, the initial values are not! You may end up in different local minima as a result, or stop in different places.
If you wish to see why, take a look at Why should weights of Neural Networks be initialized to random numbers?
Edit 1:
Notes
Since the user is defining the testing/training data manually, there is no randomization in how the training data sets are selected.
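As a rough illustration of the same effect outside MATLAB, here is a hedged numpy toy (all names, sizes, and hyperparameters are illustrative assumptions): the same tiny network is trained twice on identical data with identical, deterministic updates, and only the random weight initialization differs between the two runs, yet the final errors typically differ.
import numpy as np

# Fixed, shared training data so only the weight initialization varies.
data_rng = np.random.RandomState(42)
X = data_rng.rand(50, 3)
y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)

def train(seed, epochs=500, lr=0.1):
    rng = np.random.RandomState(seed)
    W1 = rng.randn(3, 5)        # only these initial values differ between runs
    W2 = rng.randn(5, 1)
    for _ in range(epochs):     # plain, deterministic full-batch gradient descent
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        W2 -= lr * (h.T @ err) / len(X)
        W1 -= lr * (X.T @ ((err @ W2.T) * (1 - h ** 2))) / len(X)
    h = np.tanh(X @ W1)
    return float(np.mean((h @ W2 - y) ** 2))

print(train(seed=0), train(seed=1))   # typically two different final errors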

How to perform multi-label learning with LSTM using theano?

I have some text data with multiple labels for each document. I want to train an LSTM network using Theano for this dataset. I came across http://deeplearning.net/tutorial/lstm.html but it only facilitates a binary classification task. If anyone has any suggestions on which method to proceed with, that would be great. I just need an initial feasible direction I can work on.
Thanks,
Amit
1) Change the last layer of the model. I.e.
pred = tensor.nnet.softmax(tensor.dot(proj, tparams['U']) + tparams['b'])
should be replaced by some other layer, e.g. sigmoid:
pred = tensor.nnet.sigmoid(tensor.dot(proj, tparams['U']) + tparams['b'])
2) The cost should also be changed.
I.e.
cost = -tensor.log(pred[tensor.arange(n_samples), y] + off).mean()
should be replaced by some other cost, e.g. cross-entropy:
one = np.float32(1.0)
pred = tensor.clip(pred, 0.0001, 0.9999) # keep predictions away from 0 and 1 so the log is well-defined
cost = -tensor.sum(y * tensor.log(pred) + (one - y) * tensor.log(one - pred), axis=1) # sum over all labels
cost = tensor.mean(cost, axis=0) # compute mean over samples
3) In the function build_model(tparams, options), you should replace:
y = tensor.vector('y', dtype='int64')
by
y = tensor.matrix('y', dtype='int64') # Each row of y is one sample's label vector, e.g. [1 0 0 1 0]; sklearn.preprocessing.MultiLabelBinarizer() may be handy (see the short example after step 4).
4) Change pred_error() so that it supports multilabel (e.g. using some metrics like accuracy or F1 score from scikit-learn).
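As a small, hedged illustration of the target format mentioned in step 3 (independent of the tutorial code; the label lists below are made up), sklearn.preprocessing.MultiLabelBinarizer can build the binary label matrix:
from sklearn.preprocessing import MultiLabelBinarizer

labels_per_doc = [[0, 3], [1], [0, 1, 2]]   # hypothetical label indices for three documents
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels_per_doc)       # rows like [1 0 0 1], one column per label
print(mlb.classes_)                         # [0 1 2 3]
print(Y)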
You can change the last layer of the model. It would have a vector of targets where each element is 0 or 1, depending on whether the sample has that label or not.