Meaning of bias with zero inputs in a perceptron in ANNs - neural-network

I'm a student in a graduate computer science program. Yesterday we had a lecture about neural networks.
I think I understand the individual parts of a perceptron in a neural network, with one exception. I have already done some research on the bias in a perceptron, but I still don't get it.
So far I know that the bias lets me shift the weighted sum over the inputs of a perceptron, so that the neuron fires when that sum minus a specific bias exceeds the activation function's threshold (e.g. a sigmoid).
But on his presentation slides, my professor mentioned something like this:
The bias is added to the perceptron to avoid issues where all inputs
could be equal to zero - no multiplicative weight would have an effect
I can't figure out the meaning behind this sentence, or why it is important that the sum over all weighted inputs cannot be equal to zero. If all inputs are equal to zero, there should be no impact on the perceptrons in the next hidden layer, right? Furthermore, the perceptron would then just pass a static value to backpropagation and have no influence on updating its weights.
Or am I wrong?
Does anyone have an answer for this?
Thanks in advance.

Bias
A bias is essentially an offset.
Imagine the simple case of a single perceptron, with a relationship between the input and the output, say:
y = 2x + 3
Without the bias term, the perceptron could match the slope (often called the weight) of "2", meaning it could learn:
y = 2x
but it could not match the "+ 3" part.
Although this is a simple example, this logic scales to neural networks in general. The neural network can capture nonlinear functions, but often it needs an offset to do so.
What you asked
What your professor said is another good example of why an offset would be needed. Imagine all the inputs to a perceptron are 0. A perceptron's output is the sum of each of the inputs multiplied by a weight. This means that each weight is being multiplied by 0, then added together. Therefore, the result will always be 0.
With a bias, however, the output can still take a nonzero value.
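To make this concrete, here is a minimal Matlab sketch (the weight and bias values are made up for illustration): with an all-zero input vector, the weighted sum is zero no matter what the weights are, but a bias still lets the neuron produce output.

x = [0 0 0];              % all inputs are zero
w = [2 -1 0.5];           % arbitrary example weights
b = 3;                    % example bias

withoutBias = w * x';     % always 0 - no weight has any effect
withBias    = w * x' + b; % 3 - the neuron still retains a value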

Related

What is the purpose of a bias in an artificial neural network?

I read in a book where the author mentioned that the bias b_k is used to produce an affine transform of the output u_k (the summation of the weighted input signals).
The author also mentioned that this bias, by contributing a constant value of, say, 'k', makes the neuron behave as if it were not connected to the previous layer.
I am in a confused state. Can someone please tell me what the above two points mean, and whether there are any other uses of a bias in the network?
Thanks in advance!
If the neuron's activation is z(a) = wa + b, b is the bias. It's called a bias because the larger it is, the more this neuron is biased - or in other words, the less it cares about what was passed to it (a) from the last layer. I'm assuming the second point refers to the fact that if a bias is large enough (positive or negative), it is as if the neuron no longer cares what is passed to it: it is always going to pass roughly the same thing to the next layer. I would need to see it in context to be certain about what the author is saying, but overall you just need to understand that the bias is a constant that can bias the neuron (make it care less about what the last layer gave it). Don't fret too much about its implications, though, because the learning (or optimization) process is going to adjust these automatically, so you're not going to have to choose proper bias values for the network. As you become more familiar with the concepts, it will start to make more sense.
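To illustrate that saturation point with a small sketch (the numbers below are made up), a sigmoid neuron with a large bias produces nearly the same output regardless of what the previous layer passes in:

sigmoid = @(z) 1 ./ (1 + exp(-z));

w = 0.5;  b = 10;       % large positive bias (illustrative values)
a = [-1 0 1];           % different activations from the last layer

sigmoid(w*a + b)        % ~1 for every input: the bias dominates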

Matlab: dealing with the performance cost of denormal conversion close to realmin in backprop

I understand that if a number gets closer to zero than realmin, then Matlab converts the double to a denormal. I am noticing that this causes a significant performance cost. In particular, I am using a gradient descent algorithm where, near convergence, the gradients (in backprop for my bespoke neural network) drop below realmin, such that the algorithm incurs a heavy performance cost (due to, I am assuming, type conversion behind the scenes). I have used the following code to validate my gradient matrices so that no number falls below realmin:
function mat = validateSmallDoubles(obj, mat, threshold)
    % Flush to zero any elements whose magnitude falls below the
    % threshold, so nothing drifts into the denormal range near realmin.
    mat = mat .* (abs(mat) > threshold);
end
Is this usual practice, and what value should threshold take? (Obviously you want it as close to realmin as possible, but not too close, otherwise any additional division operations will send some elements of mat below realmin after validation.) Also, specifically for neural networks, where are the best places to do gradient validation without ruining the network's ability to learn? I would be grateful to know what solutions people with experience in training neural networks use. I am sure this is a problem for all languages. Tentative threshold values have ruined my network's learning.
I do not know if it is somehow related to your problem, but I had a similar problem with underflows while computing an exponentially weighted average of gradients (say, while implementing Momentum or Adam).
In particular, at some point you do something like:
v := 0.9*v + 0.1*gradient, where v is the exponentially weighted average of your gradient g. If, over a lot of successive iterations, the same element of your g matrix remains 0, your v quickly becomes very small and you hit denormals.
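As an illustration of how quickly this happens (a toy loop, not the actual optimiser code): if a gradient element stays at zero, the average just decays geometrically until it underflows past realmin.

v = 1;
for k = 1:7000
    v = 0.9 * v;        % gradient element is 0, so v only decays
end
v < realmin             % true: v is ~5e-321, a denormal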
So the problem is: why all those zeros? In my case the culprits were the ReLU units, which output a lot of zeros (if x < 0, relu(x) is zero). When a ReLU outputs zero on a given neuron, the related weight has no effect, which means the corresponding partial derivative will be zero in g. So it happened that, over a lot of successive iterations, that particular neuron was not fired.
To avoid having zero activations (and derivatives), I used a "leaky ReLU", so as to have a very small derivative instead.
Another solution is to use gradient clipping before applying your weighted average, to threshold your gradients to a minimum value, which is quite similar to what you did.
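For reference, a leaky ReLU and its derivative can each be written in one line; the 0.01 slope for negative inputs is a common but arbitrary choice:

leakyRelu     = @(x) max(x, 0) + 0.01 * min(x, 0);   % small slope below zero
leakyReluGrad = @(x) (x > 0) + 0.01 * (x <= 0);      % derivative is never exactly 0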
I traced the diminishing-gradient occurrences to the Adam SGD optimiser: the biased moving-average matrix calculations in the Adam optimiser were causing Matlab to carry out the denormal operations. I simply thresholded the matrix elements for each layer after these calculations, with threshold = 10*realmin, flushing them to zero, without any effect on learning. I have yet to investigate why my moving averages were getting so close to zero, as my architecture and weight-initialisation priors would normally mitigate this.

Basic intuition for neural networks?

There are lots of "introduction to neural networks" articles online, but most are an introduction to the math of artificial neural networks and not an introduction to the actual underlying concepts (even though they should be one and the same). How does a simple network of artificial neurons actually work?
This answer is roughly based on the beginning of "Neural Networks and Deep Learning" by M. A. Nielsen, which is definitely worth reading - it's online and free.
The fundamental idea behind all neural networks is this: Each neuron in a neural network makes a decision. Once you understand how they do that, everything else will make sense. Let’s walk through a simple situation which will help us arrive at that understanding.
Let’s say you are trying to decide whether or not to wear a hat today. There are a number of factors which will affect your decision, and perhaps the most important ones are:
Is it sunny?
Do I have a hat to wear?
Would a hat suit my outfit?
For simplicity, we’ll assume these are the only three factors that you’re weighing up during this decision. Forgetting about neural networks for a second, let’s just try to build a ‘decision maker’ to help us answer this question.
First, we can see each question has a certain level of importance, and so we’ll need to use this relative importance of each question, along with the corresponding answer to each question, to make our decision.
Secondly, we’ll need to have some component which interprets each (yes or no) answer along with its importance to produce the final answer. This sounds simple enough to put into an equation, right? Let’s do it. We simply decide how important each factor is and multiply that importance (or ‘weight’) by the answer to the question (which can be 0 or 1):
3a + 5b + 2c > 6
The numbers 3, 5 and 2 are the ‘weights’ of question a, b and c, respectively. a, b and c, themselves can be either zero (the answer to the question was ‘no’), or one (the answer to the question was ‘yes’). If the above equation is true, then the decision is to wear a hat, and if it is false, the decision is to not wear a hat. The equation says that we’ll only wear a hat if the sum of our weights multiplied by our factors is greater than some threshold value. Above, I chose a threshold value of 6. If you think about it, this means that if I don’t have a hat to wear (b=0), no matter what the other answers are, I won’t be wearing a hat today. That is,
3a + 2c > 6
is never true, since a and c are only either 0 or 1. This makes sense - our simple decision model tells us not to wear a hat if we don't have one! So the weights of 3, 5 and 2, and the threshold value of 6, seem like good choices for our simple "should I wear a hat" decision-maker. It also means that, as long as I have a hat to wear, the sun shining (a=1) OR the hat suiting my outfit (c=1) is enough to make me wear a hat today. That is,
5 + 3 > 6 and 5 + 2 > 6
are both true. Good! You can see that by adjusting the weighting of each factor and the threshold, and by adding more factors, we can adjust our ‘decision maker’ to approximately model any decision-making process. What we have just demonstrated is the functionality of a simple neuron (a decision-maker!). Let’s put the above equation into ‘neuron-form’:
A neuron which processes 3 factors: a, b, c, with corresponding importance weightings of 3, 5, 2, and with a decision threshold of 6.
The neuron has 3 input connections (the factors) and 1 output connection (the decision). Each input connection has a weighting which encodes the importance of that connection. If the weighting of that connection is low (relative to the other weights), then it won’t have much effect on the decision. If it’s high, the decision will heavily depend on it.
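If it helps to see it run, here is the hat decision-maker transcribed directly into a few lines of Matlab (a sketch of the inequality above, nothing more):

weights   = [3 5 2];    % importance of: sunny, have a hat, suits outfit
threshold = 6;

answers = [1 1 0];      % a=1 (sunny), b=1 (have a hat), c=0 (doesn't suit)
wearHat = weights * answers' > threshold   % true: 3 + 5 = 8 > 6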
This is great, we've got a fully working neuron that weights inputs and makes decisions. So here's the next thought: what if the output (our decision) was fed into the input of another neuron? That neuron would be using our decision about our hat to make a more abstract decision. And what if the inputs a, b and c are themselves the outputs of other neurons which compute lower-level decisions? We can see that neural networks can be interpreted as networks which compute decisions about decisions, leading from simple input data to more and more complex 'meta-decisions'. This, to me, is an incredible concept. All the complexity of even the human brain can be modelled using these principles. From the level of photons interacting with our cone-cells right up to our pondering of the meaning of life, it's just simple little decision-making neurons.
Below is a diagram of a simple neural network which essentially has 3 layers of abstraction:
A simple neural network with 2 inputs and 2 outputs.
As an example, the above inputs could be 2 infrared distance sensors, and the outputs might control the on/off switches for 2 motors which drive the wheels of a robot.
In our simple hat example, we could pick the weights and the threshold quite easily, but how do we pick the weights and thresholds in this example so that, say, the robot can follow things that move? And how do we know how many neurons we need to solve this problem? Could we solve it with just 1 neuron, maybe 2? Or do we need 20? And how do we organise them? In layers? Modules? These are the central questions in the field of neural networks. Techniques such as 'backpropagation' and (more recently) 'neuroevolution' are used effectively to answer some of these troubling questions, but these are outside the scope of this introduction - Wikipedia, Google Scholar and free online textbooks like "Neural Networks and Deep Learning" by M. A. Nielsen are great places to start learning about these concepts.
Hopefully you now have some intuition for how neural networks work, but if you're interested in actually implementing a neural network there are a few optimisations and extensions to our concept of a neuron which will make our neural nets more efficient and effective.
Firstly, notice that if we set the threshold value of the neuron to zero, we can always adjust the weightings of the inputs to account for this – only, we’ll also need to allow negative values for our weights. This is great since it removes one variable from our neuron. So we’ll allow negative weights and from now on we won’t need to worry about setting a threshold – it’ll always be zero.
Next, we’ll notice that the weights of the input connections are all relative to one-another, so we can actually normalise these to a value between -1 and 1. Cool. That simplifies things a little.
We can make a further, more substantial improvement to our decision-maker by realising that the inputs themselves (a, b and c in the above example) need not just be 0 or 1. For example, what if today is really sunny? Or maybe there's scattered clouds, so it's intermittently sunny? We can see that by allowing values between 0 and 1, our neuron gets more information and can therefore make a better decision - and the good news is, we don't need to change anything in our neuron model!
So far, we’ve allowed the neuron to accept inputs between 0 and 1, and we’ve normalised the weights between -1 and 1 for convenience.
The next question is: why do we need such certainty in our final decision (i.e. the output of the neuron)? Why can’t it, like the inputs, also be a value between 0 and 1? If we did allow this, the decision of whether or not to wear a hat would become a level of certainty that wearing a hat is the right choice. But if this is a good idea, why did I introduce a threshold at all? Why not just directly pass on the sum of the weighted inputs to the output connection? Well, because, for reasons beyond the scope of this simple introduction to neural networks, it turns out that a neural network works better if the neurons are allowed to make something like an ‘educated guess’, rather than just presenting a raw probability. A threshold gives the neurons a slight bias toward certainty and allows them to be more ‘assertive’, and doing so makes neural networks more efficient. So in that sense, a threshold is good. But the problem with a threshold is that it doesn’t let us know when the neuron is uncertain about its decision – that is, if the sum of the weighted inputs is very close to the threshold, the neuron makes a definite yes/no answer where a definite yes/no answer is not ideal.
So how can we overcome this problem? Well it turns out that if we replace our “greater than zero” condition with a continuous function (called an ‘activation function’), then we can choose non-binary and non-linear reactions to the neuron’s weighted inputs. Let’s first look at our original “greater than zero” condition as a function:
‘Step’ function representing the original neuron’s ‘activation function’.
In the above activation function, the x-axis represents the sum of the weighted inputs and the y-axis represents the neuron’s output. Notice that even if the inputs sum to 0.01, the output is a very certain 1. This is not ideal, as we’ve explained earlier. So we need another activation function that only has a bias towards certainty. Here’s where we welcome the ‘sigmoid’ function:
The ‘sigmoid’ function; a more effective activation function for our artificial neural networks.
Notice how it looks like a halfway point between a step function (which we established was too certain) and the linear x = y line that we'd expect from a neuron which just outputs the raw probability that some decision is correct. The equation for this sigmoid function is:
σ(x) = 1 / (1 + e^(-x))
where x is the sum of the weighted inputs.
And that’s it! Our new-and-improved neuron does the following:
Takes multiple inputs between 0 and 1.
Weights each one by a value between -1 and 1.
Sums them all together.
Puts that sum into the sigmoid function.
Outputs the result!
It's deceptively simple, but by combining these simple decision-makers together and finding ideal connection weights, we can make arbitrarily complex decisions and calculations which stretch far beyond what our biological brains allow.
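Put together, the new-and-improved neuron is only a few lines of Matlab (the input and weight values below are made-up examples):

sigmoid = @(z) 1 ./ (1 + exp(-z));

inputs  = [0.9 1.0 0.4];   % quite sunny, have a hat, so-so outfit
weights = [0.3 0.5 0.2];   % normalised importance of each factor

output = sigmoid(inputs * weights')   % ~0.70: fairly confident 'wear the hat'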

Why do we take the derivative of the transfer function when calculating the backpropagation algorithm?

What is the concept behind taking the derivative? It's interesting that to somehow teach a system, we have to adjust its weights. But why do we do this using the derivative of the transfer function? What is it about the derivative that helps us? I know the derivative is the slope of a continuous function at a given point, but what does it have to do with the problem?
You must already know that the cost function is a function with the weights as the variables.
For now consider it as f(W).
Our main motive here is to find a W for which we get the minimum value for f(W).
One way of doing this would be to plot the function f on one axis and W on another... but remember that here W is not just a single variable but a collection of variables.
So what can the other way be?
It can be as simple as changing the values of W and seeing whether we get a lower value of f(W) than before.
But taking random values for all the variables in W can be a tedious task.
So what we do is first take random values for W, then look at the output f(W) and at the slope with respect to each variable (we get this by partially differentiating the function with respect to the i-th variable and plugging in the current value of the i-th variable).
Now, once we know the slope at that point in the space, we move a little further towards the lower side of the slope (this little factor is termed alpha in gradient descent), and this goes on until the slope changes sign, indicating we have already reached the lowest point of the graph (a graph with n dimensions: the function vs. W, W being a collection of n variables).
The reason is that we are trying to minimize the loss. Specifically, we do this by a gradient descent method. It basically means that from our current point in the parameter space (determined by the complete set of current weights), we want to go in a direction which will decrease the loss function. Visualize standing on a hillside and walking down the direction where the slope is steepest.
Mathematically, the direction that gives you the steepest descent from your current point in parameter space is the negative gradient. And the gradient is nothing but the vector made up of all the derivatives of the loss function with respect to each single parameter.
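As a tiny worked example (the loss and starting point are made up), one gradient descent step on f(w) = w^2 moves w against the gradient:

f    = @(w) w.^2;          % toy loss function
grad = @(w) 2*w;           % its derivative

w = 3;  alpha = 0.1;       % arbitrary start and learning rate
w = w - alpha * grad(w)    % 3 - 0.1*6 = 2.4: one step downhill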
Backpropagation is an application of the Chain Rule to neural networks. If the forward pass involves applying a transfer function, the gradient of the loss function with respect to the weights will include the derivative of the transfer function, since the derivative of f(g(x)) is f’(g(x))g’(x).
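For instance, here is a hand-worked sketch (with made-up numbers, not any particular library's code) for a single sigmoid neuron with a squared-error loss, where the transfer function's derivative appears as the middle chain-rule factor:

sigmoid  = @(z) 1 ./ (1 + exp(-z));
dsigmoid = @(z) sigmoid(z) .* (1 - sigmoid(z));   % derivative of the transfer function

x = 0.5;  w = 2;  t = 1;   % input, weight, target (illustrative)
z = w * x;                 % pre-activation
a = sigmoid(z);            % neuron output, loss L = (a - t)^2

% dL/dw = dL/da * da/dz * dz/dw
dLdw = 2*(a - t) * dsigmoid(z) * x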
Your question is a really good one! Why should I move the weight more in one direction when the slope of the error wrt. the weight is high? Does that really make sense? In fact, it does make sense if the error function wrt. the weight is a parabola. However, it is a wild guess to assume it is a parabola. As rcpinto says, assuming the error function is a parabola makes the derivation of the updates simple with the Chain Rule.
However, there are some other parameter update rules that actually address this non-intuitive assumption. You can make an update rule that moves the weight a fixed-size step in the down-slope direction, and then perhaps later in the training decrease the step size logarithmically as you train. (I'm not sure if this method has a formal name.)
There are also some alternative error functions that can be used. Look up cross entropy in your neural network textbook. This is an adjustment to the error function such that the derivative (of the transfer function) factor in the update rule cancels out. Just remember to pick the right cross-entropy function based on your output transfer function.
When I first started getting into Neural Nets, I had this question too.
The other answers here have explained the math which makes it pretty clear that a derivative term will appear in your calculations while you are trying to update the weights. But all of those calculations are being done in order to implement Back-propagation, which is just one of the ways of updating weights! Now read on...
You are correct in assuming that at the end of the day, all a neural network tries to do is update its weights to fit the data you feed into it. Within this statement lies your answer too. What you are getting confused with here is the idea of the Back-propagation algorithm. Many textbooks use backprop to update neural nets by default but do not mention that there are other ways to update weights too. This leads to the confusion that neural nets and backprop are the same thing and are inherently connected. This also leads to the false belief that neural nets need backprop to train.
Please remember that Back-propagation is just ONE of the ways out there to train your neural network (although it is the most famous one). Now, you must have seen the math involved in backprop, and hence you can see where the derivative term comes from (some other answers have also explained that). It is possible that other training methods won't need the derivatives, although most of them do. Read on to find out why....
Think about it intuitively: we are talking about CHANGING weights, and the direct mathematical operation related to change is a derivative, so it makes sense that you need to evaluate derivatives in order to change weights.
Do let me know if you are still confused and I'll try to modify my answer to make it better. Just as a parting piece of information, another common misconception is that gradient descent is a part of backprop, just like it is assumed that backprop is a part of neural nets. Gradient descent is just one way to minimize your cost function, there are plenty of others you can use. One of the answers above makes this wrong assumption too when it says "Specifically Gradient Descent". This is factually incorrect. :)
Training a neural network means minimizing an associated "error" function wrt the network's weights. Now there are optimization methods that use only function values (the simplex method of Nelder and Mead, Hooke and Jeeves, etc.), methods that in addition use first derivatives (steepest descent, quasi-Newton, conjugate gradient), and Newton methods that use second derivatives as well. So if you want to use a derivative method, you have to calculate the derivatives of the error function, which in turn involves the derivatives of the transfer or activation function.
Back propagation is just a nice algorithm to calculate the derivatives, and nothing more.
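As a quick contrast between the two families of methods (a toy check, not production code), the same derivative that backpropagation computes analytically can be approximated from function values alone by a finite difference:

f = @(w) (1 ./ (1 + exp(-w*0.5)) - 1).^2;   % toy one-weight loss (illustrative)
w = 2;  h = 1e-6;

numericGrad = (f(w+h) - f(w-h)) / (2*h)     % ~ -0.0529, from function values only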
Yes, the question is really good; it also came into my head while I was trying to understand backpropagation. After doing forward propagation on the neural network, we do backpropagation in the network to minimize the total error. There are also many other ways to minimize the error. Your question is: why do we take derivatives in backpropagation? The reason is that, as we all know, the meaning of the derivative is the slope of a function, or in other words, the change in one particular thing with respect to another. So here we take the derivative in order to minimize the total error with respect to the corresponding weights of the network.
By taking the derivative of the total error with respect to the weights, we find its slope, or in other words, the change in the total error for a small change in a weight, so that we can update the weight to minimize the error with the help of the gradient descent formula: weight = weight - alpha * (d(total error)/d(weight)). Or in other words: new weights = old weights - learning rate x partial derivatives of the loss function w.r.t. the parameters.
Here alpha is the learning rate, which controls the weight update: if the derivative is negative, the -alpha in the formula makes the update positive, and if it is positive, the update is negative, so the weight update always moves in the direction that reduces the total error. Also, since the derivative is multiplied by alpha, the step effectively shrinks as the weight converges to its optimal value (minimum error). That's why we take the derivative to minimize the error.

neural networks and back propagation, justification for removeconstantrows in MATLAB

I was wondering: MATLAB has a removeconstantrows function that should be applied to feedforward neural network input and target output data. This function removes constant rows from the data. For example, if one input vector for a 5-input neural network is [1 1 1 1 1], then it is removed.
Googling, the best explanation I could find is that (paraphrasing) "constant rows are not needed and can be replaced by appropriate adjustments to the biases of the output layer".
Can someone elaborate?
Who does this adjustment?
From my book, the weight adjustment for simple gradient descent is:
Δ weight_i = learning_rate * local_gradient * input_i
Which means that all weights of a neuron at the first hidden layer are adjusted the same amount. But they ARE adjusted.
I think there is a misunderstanding. The "row" is not an input pattern, but a feature, that is, the i-th component across all patterns. It's obvious that if some feature does not have much variance over the whole data set, it does not provide valuable information and does not play a noticeable role in network training.
The comparison to a bias is reasonable (though I don't agree that this applies to the output layer only, because it depends on where the constant row is found - if it's in the input data, then it applies just as well to the first hidden layer, imho). If you remember, it's recommended for each neuron in a backpropagation network to have a special bias weight, connected to a constant signal of 1. If, for example, a training set contains a row of all 1s, then this is the same as an additional bias. If the constant row has a different value, then the bias will have a different effect, but in any case you can simply eliminate this row and add the constant contribution of the row into the existing bias.
Disclaimer: I'm not a Matlab user. My background in neural networks comes solely from the programming area.
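To make the folding step described above concrete, here is a minimal sketch (the data and weights are made up): if feature row j holds the constant c in every pattern, its contribution w(j)*c can be absorbed into the bias and the row removed without changing the neuron's pre-activations.

X = [0.1 0.4 0.7;          % features x patterns; row 3 is constant
     0.9 0.8 0.2;
     2.0 2.0 2.0;
     0.3 0.6 0.5];
w = [0.5 -0.2 0.4 0.1];    % input weights of one first-layer neuron
b = 0.25;                  % its bias

j = 3;  c = X(j, 1);       % the constant row and its value

b = b + w(j) * c;          % fold the constant contribution into the bias
w(j) = [];  X(j, :) = [];  % drop the weight and the constant row

% w*X + b now gives the same pre-activations as before the removal.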