There are lots of "introduction to neural networks" articles online, but most are an introduction to the math of artificial neural networks and not an introduction to the actual underlying concepts (even though they should be one and the same). How does a simple network of artificial neurons actually work?
This answer is roughly based on the beginning of "Neural Networks and Deep Learning" by M. A. Nielsen which is definitely worth reading - it's online and free.
The fundamental idea behind all neural networks is this: Each neuron in a neural network makes a decision. Once you understand how they do that, everything else will make sense. Let’s walk through a simple situation which will help us arrive at that understanding.
Let’s say you are trying to decide whether or not to wear a hat today. There are a number of factors which will affect your decision, and perhaps the most important ones are:
Is it sunny?
Do I have a hat to wear?
Would a hat suit my outfit?
For simplicity, we’ll assume these are the only three factors that you’re weighing up during this decision. Forgetting about neural networks for a second, let’s just try to build a ‘decision maker’ to help us answer this question.
First, we can see that each question has a certain level of importance, and so we'll need to use the relative importance of each question, along with its answer, to make our decision.
Secondly, we’ll need to have some component which interprets each (yes or no) answer along with its importance to produce the final answer. This sounds simple enough to put into an equation, right? Let’s do it. We simply decide how important each factor is and multiply that importance (or ‘weight’) by the answer to the question (which can be 0 or 1):
3a + 5b + 2c > 6
The numbers 3, 5 and 2 are the ‘weights’ of questions a, b and c, respectively. a, b and c themselves can be either zero (the answer to the question was ‘no’) or one (the answer was ‘yes’). If the above equation is true, then the decision is to wear a hat, and if it is false, the decision is to not wear a hat. The equation says that we’ll only wear a hat if the sum of our weights multiplied by our factors is greater than some threshold value. Above, I chose a threshold value of 6. If you think about it, this means that if I don’t have a hat to wear (b=0), then no matter what the other answers are, I won’t be wearing a hat today. That is,
3a + 2c > 6
is never true, since a and c are only either 0 or 1. This makes sense – our simple decision model tells us not to wear a hat if we don’t have one! So the weights of 3, 5 and 2, and the threshold value of 6, seem like good choices for our simple “should I wear a hat” decision-maker. It also means that, as long as I have a hat to wear, the sun shining (a=1) OR the hat suiting my outfit (c=1) is enough to make me wear a hat today. That is,
5 + 3 > 6 and 5 + 2 > 6
are both true. Good! You can see that by adjusting the weighting of each factor and the threshold, and by adding more factors, we can adjust our ‘decision maker’ to approximately model any decision-making process. What we have just demonstrated is the functionality of a simple neuron (a decision-maker!). Let’s put the above equation into ‘neuron-form’:
A neuron which processes 3 factors: a, b, c, with corresponding importance weightings of 3, 5, 2, and with a decision threshold of 6.
The neuron has 3 input connections (the factors) and 1 output connection (the decision). Each input connection has a weighting which encodes the importance of that connection. If the weighting of that connection is low (relative to the other weights), then it won’t have much effect on the decision. If it’s high, the decision will heavily depend on it.
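To make this concrete, here is a minimal sketch of the hat neuron in code (the weights 3, 5, 2 and threshold 6 are the example values from above; the function name is just for illustration):

```python
# The 'should I wear a hat?' neuron from the example above.
def hat_neuron(sunny, have_hat, suits_outfit):
    """Each input is 0 ('no') or 1 ('yes'); returns the decision."""
    weights = [3, 5, 2]                        # importance of each factor
    inputs = [sunny, have_hat, suits_outfit]
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return weighted_sum > 6                    # the decision threshold

print(hat_neuron(sunny=1, have_hat=1, suits_outfit=0))  # True  (3 + 5 = 8 > 6)
print(hat_neuron(sunny=1, have_hat=0, suits_outfit=1))  # False (3 + 2 = 5 < 6)
```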
This is great – we’ve got a fully working neuron that weights inputs and makes decisions. So here’s the next thought: what if the output (our decision) was fed into the input of another neuron? That neuron would be using our decision about our hat to make a more abstract decision. And what if the inputs a, b and c are themselves the outputs of other neurons which compute lower-level decisions? We can see that neural networks can be interpreted as networks which compute decisions about decisions, leading from simple input data to more and more complex ‘meta-decisions’. This, to me, is an incredible concept. All the complexity of even the human brain can be modelled using these principles. From the level of photons interacting with our cone-cells right up to our pondering of the meaning of life, it’s just simple little decision-making neurons.
Below is a diagram of a simple neural network which essentially has 3 layers of abstraction:
A simple neural network with 2 inputs and 2 outputs.
As an example, the above inputs could be 2 infrared distance sensors, and the outputs might control the on/off switch for 2 motors which drive the wheels of a robot.
In our simple hat example, we could pick the weights and the threshold quite easily, but how do we pick the weights and thresholds in this example so that, say, the robot can follow things that move? And how do we know how many neurons we need to solve this problem? Could we solve it with just 1 neuron, maybe 2? Or do we need 20? And how do we organise them? In layers? Modules? These are the central questions in the field of neural networks. Techniques such as ‘backpropagation’ and (more recently) ‘neuroevolution’ are used effectively to answer some of these troubling questions, but these are outside the scope of this introduction – Wikipedia, Google Scholar, and free online textbooks like “Neural Networks and Deep Learning” by M. A. Nielsen are great places to start learning about these concepts.
Hopefully you now have some intuition for how neural networks work, but if you’re interested in actually implementing a neural network there are a few optimisations and extensions to our concept of a neuron which will make our neural nets more efficient and effective.
Firstly, notice that if we set the threshold value of the neuron to zero, we can always adjust the weightings of the inputs to account for this – only, we’ll also need to allow negative values for our weights. This is great since it removes one variable from our neuron. So we’ll allow negative weights and from now on we won’t need to worry about setting a threshold – it’ll always be zero.
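For example, our hat neuron’s condition 3a + 5b + 2c > 6 can be rewritten as 3a + 5b + 2c − 6 > 0: the threshold of 6 has become an always-present extra term with weight −6 (usually called a ‘bias’), and the threshold itself is now zero.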
Next, we’ll notice that the weights of the input connections are all relative to one-another, so we can actually normalise these to a value between -1 and 1. Cool. That simplifies things a little.
We can make a further, more substantial improvement to our decision-maker by realising that the inputs themselves (a, b and c in the above example) need not just be 0 or 1. For example, what if today is really sunny? Or maybe there are scattered clouds, so it’s intermittently sunny? We can see that by allowing values between 0 and 1, our neuron gets more information and can therefore make a better decision – and the good news is, we don’t need to change anything in our neuron model!
So far, we’ve allowed the neuron to accept inputs between 0 and 1, and we’ve normalised the weights between -1 and 1 for convenience.
The next question is: why do we need such certainty in our final decision (i.e. the output of the neuron)? Why can’t it, like the inputs, also be a value between 0 and 1? If we did allow this, the decision of whether or not to wear a hat would become a level of certainty that wearing a hat is the right choice. But if this is a good idea, why did I introduce a threshold at all? Why not just directly pass on the sum of the weighted inputs to the output connection? Because, for reasons beyond the scope of this simple introduction, it turns out that a neural network works better if the neurons are allowed to make something like an ‘educated guess’, rather than just presenting a raw probability. A threshold gives the neurons a slight bias toward certainty and allows them to be more ‘assertive’, and doing so makes neural networks more efficient. So in that sense, a threshold is good. But the problem with a threshold is that it doesn’t let us know when the neuron is uncertain about its decision – that is, if the sum of the weighted inputs is very close to the threshold, the neuron gives a definite yes/no answer where one is not ideal.
So how can we overcome this problem? Well it turns out that if we replace our “greater than zero” condition with a continuous function (called an ‘activation function’), then we can choose non-binary and non-linear reactions to the neuron’s weighted inputs. Let’s first look at our original “greater than zero” condition as a function:
‘Step’ function representing the original neuron’s ‘activation function’.
In the above activation function, the x-axis represents the sum of the weighted inputs and the y-axis represents the neuron’s output. Notice that even if the inputs sum to 0.01, the output is a very certain 1. This is not ideal, as we’ve explained earlier. So we need another activation function that only has a bias towards certainty. Here’s where we welcome the ‘sigmoid’ function:
The ‘sigmoid’ function; a more effective activation function for our artificial neural networks.
Notice how it looks like a halfway point between a step function (which we established as too certain) and the linear y = x line we’d expect from a neuron which just outputs the raw probability that some decision is correct. The equation for this sigmoid function is:
σ(x) = 1 / (1 + e^(-x))
where x is the sum of the weighted inputs.
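To see the difference concretely, here is a quick comparison (a minimal sketch; the numbers are illustrative) of how the two activation functions treat a weighted sum that only just clears the threshold:

```python
import math

def step(x):
    return 1 if x > 0 else 0          # the original 'hard' threshold

def sigmoid(x):
    return 1 / (1 + math.exp(-x))     # the smoother alternative

print(step(0.01))     # 1     - an absolutely certain 'yes'
print(sigmoid(0.01))  # ~0.50 - 'barely more yes than no'
print(sigmoid(5.0))   # ~0.99 - confident when the evidence is strong
```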
And that’s it! Our new-and-improved neuron does the following (see the sketch after this list):
Takes multiple inputs between 0 and 1.
Weights each one by a value between -1 and 1.
Sums them all together.
Puts that sum into the sigmoid function.
Outputs the result!
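Putting those five steps together, a complete improved neuron might look like this (a minimal sketch; the inputs and weights below are illustrative):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights):
    """Inputs between 0 and 1, weights between -1 and 1; returns a 0-1 output."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(weighted_sum)

# 'How sunny is it?' = 0.7, 'do I have a hat?' = 1.0, 'does it suit me?' = 0.3
print(neuron([0.7, 1.0, 0.3], [0.4, 0.9, 0.2]))  # ~0.78: fairly confident 'yes'
```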
It's deceptively simple, but by combining these simple decision-makers together and finding ideal connection weights, we can make arbitrarily complex decisions and calculations which stretch far beyond what our biological brains allow.
Related
I read in a book where the author mentioned that the bias b_k is used to produce an affine transform of the output u_k (the summation of the weighted input signals).
Also, the author mentioned that this bias, by contributing a constant value of, say, 'k', makes the neuron act as if it were not connected to the previous layer.
I am in a confused state. Can someone please tell me what the above two points mean, and if there are any other uses of a bias to the network?
Thanks in advance!
If the neuron's activation is z(a) = wa + b, then b is the bias. It's a bias because the larger it is, the more this neuron is biased; in other words, it doesn't care much about what was passed to it (a) from the last layer. I'm assuming the second point is referring to the fact that if a bias is large enough (positive or negative), it is as if the neuron no longer cares what is passed to it: it's always going to pass roughly the same thing to the next layer. I would need to see it in context to be certain about what the author is saying, but overall you just need to understand that it is a constant that can add bias (i.e. make the neuron care less about what the last layer gave it). Don't fret too much about its implications, though, because the learning (or optimisation) process is going to adjust these automatically, so you're not going to have to choose proper bias values for the network. As you become more familiar with the concepts, it will start to make more sense.
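A quick numerical illustration of that point (the weight and bias values here are made up): with a large bias, the neuron's output is essentially constant no matter what the previous layer passes in.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

w, b = 0.5, 10.0                  # a large positive bias dominates the input
for a in (0.0, 0.5, 1.0):
    print(sigmoid(w * a + b))     # ~1.0 every time: 'a' barely matters
```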
This question already has answers here: Why use softmax only in the output layer and not in hidden layers?
I have read the answer given here. My exact question pertains to the accepted answer:
Variable independence: a lot of regularization and effort is put into keeping your variables independent, uncorrelated and quite sparse. If you use a softmax layer as a hidden layer, then you will keep all your nodes (hidden variables) linearly dependent, which may result in many problems and poor generalization.
What complications arise from forgoing variable independence in hidden layers? Please provide at least one example. I know hidden variable independence helps a lot in codifying backpropagation, but backpropagation can be codified for softmax as well. (Please verify whether I am correct in this claim; the equations seem right to me, hence the claim.)
Training issue: try to imagine that to make your network work better you have to make a part of the activations from your hidden layer a little bit lower. Then automatically you are making the rest of them have a higher mean activation, which might in fact increase the error and harm your training phase.
I don't understand how you achieve that kind of flexibility even with a sigmoid hidden neuron, where you can fine-tune the activation of a particular given neuron – which is precisely gradient descent's job. So why are we even worried about this issue? If you can implement backprop, the rest will be taken care of by gradient descent. Fine-tuning the weights so as to make the activations proper is not something you would want to do, even if you could (which you can't). (Kindly correct me if my understanding is wrong here.)
Mathematical issue: by creating constraints on the activations of your model you decrease the expressive power of your model without any logical explanation. Striving to have all activations the same is not worth it, in my opinion.
Kindly explain what is being said here.
Batch normalization: I understand this; no issues here.
1/2. I don't think you have a clue of what the author is trying to say. Imagine a layer with 3 nodes. Two of these nodes have an error responsibility of 0 with respect to the output error, so there is one node that should be adjusted. But if you want to improve the output of node 0, then you immediately affect nodes 1 and 2 in that layer – possibly making the output even more wrong.
Fine tuning the weights so as to make the activations proper is not something you, even if you could do, which you cant, would want to do. (Kindly correct me if my understanding is wrong here)
That is the definition of backpropagation. That is exactly what you want. Neural networks rely on activations (which are non-linear) to map a function.
3. You're basically saying to every neuron 'hey, your output cannot be higher than x, because some other neuron in this layer already has value y'. Because all neurons in a softmax layer must have a total activation of 1, no single neuron can exceed a specific value. For small layers this is a small problem, but for big layers it is a big problem. Imagine a layer with 100 neurons. Now imagine their total output should be 1. The average value of those neurons will be 0.01, which means you are forcing the network's connections to rely on very low activations (activations will stay very low on average), whereas other activation functions output (or take as input) values in the range (0, 1) or (-1, 1).
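A tiny numerical sketch of that constraint (illustrative code, not from the answer above): with 100 neurons in a softmax layer, the outputs must sum to 1, so the mean activation is pinned at 0.01 regardless of the inputs.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

z = np.random.randn(100)      # 100 hidden 'pre-activations'
a = softmax(z)
print(a.sum())                # 1.0 (up to floating point) - the constraint
print(a.mean())               # 0.01 - the pinned average activation
```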
I'm trying to navigate an agent in an n*n gridworld domain by using Q-learning + a feedforward neural network as a Q-function approximator. Basically the agent should find the best/shortest way to reach a certain terminal goal position (+10 reward). Every step the agent takes, it gets -1 reward. In the gridworld there are also some positions the agent should avoid (-10 reward, terminal states, too).
So far I implemented a Q-learning algorithm, that saves all Q-values in a Q-table and the agent performs well.
In the next step, I want to replace the Q-table by a neural network, trained online after every step of the agent. I tried a feedforward NN with one hidden layer and four outputs, representing the Q-values for the possible actions in the gridworld (north, south, east, west).
As input I used an n x n zero matrix that has a "1" at the current position of the agent.
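For reference, that input encoding might look like this (a minimal sketch; the function name is mine):

```python
import numpy as np

def encode_state(row, col, n):
    """n x n zero matrix with a 1 at the agent's position, flattened to n*n inputs."""
    grid = np.zeros((n, n))
    grid[row, col] = 1.0
    return grid.flatten()

print(encode_state(2, 3, 4))  # the 16-dimensional input for a 4 x 4 map
```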
To reach my goal I tried to solve the problem from the ground up:
Explore the gridworld with standard Q-learning and use the Q-map as training data for the network once Q-learning is finished
--> worked fine
Use Q-learning and provide the updates of the Q-map as training data for the NN (batch size = 1)
--> worked well
Replace the Q-map completely by the NN. (This is the point when it gets interesting!)
-> FIRST MAP: 4 x 4
As described above, I have 16 "discrete" inputs and 4 outputs, and it works fine with 8 neurons (ReLU) in the hidden layer (learning rate: 0.05). I used an epsilon-greedy policy, with epsilon reducing from 1 to 0.1 within 60 episodes.
The test scenario is shown here. Performance is compared between standard Q-learning with a Q-map and "neural" Q-learning (in this case I used 8 neurons and different dropout rates).
To sum it up: neural Q-learning works well for small grids, and the performance is okay and reliable.
-> Bigger MAP: 10 x 10
Now I tried to use the neural network for bigger maps.
At first I tried this simple case.
In my case the neural net looks as follows: 100 inputs; 4 outputs; about 30 neurons (ReLU) in one hidden layer. Again I used a decreasing exploration factor for the epsilon-greedy policy; over 200 episodes the learning rate decreases from 0.1 to 0.015 to increase stability.
At first I had problems with convergence and with interpolation between single positions, caused by the discrete input vector.
To solve this I added some neighbouring positions to the vector, with values depending on their distance to the current position. This improved the learning a lot and the policy got better. Performance with 24 neurons is shown in the picture above.
Summary: the simple case is solved by the network, but only with a lot of parameter tuning (number of neurons, exploration factor, learning rate) and a special input transformation.
Now here are my questions/problems I still haven't solved:
(1) My network is able to solve really simple cases and examples in a 10 x 10 map, but it fails as the problem gets a bit more complex. In cases where failing is very likely, the network has no chance to find a correct policy.
I'm open to any idea that could improve performance in these cases.
(2) Is there a smarter way to transform the input vector for the network? I'm sure that adding the neighbouring positions to the input vector on the one hand improves the interpolation of the Q-values over the map, but on the other hand makes it harder to train special/important positions into the network. I already tried standard Cartesian two-dimensional input (x/y) at an early stage, but failed.
(3) Is there another network type than a feedforward network with backpropagation that generally produces better results for Q-function approximation? Have you seen projects where a FF-NN performs well with bigger maps?
It's known that Q-learning + a feedforward neural network as a Q-function approximator can fail even in simple problems [Boyan & Moore, 1995].
Rich Sutton has a question in the FAQ of his website related to this.
A possible explanation is the phenomenon known as interference, described in [Barreto & Anderson, 2008]:
Interference happens when the update of one state–action pair changes the Q-values of other pairs, possibly in the wrong direction.
Interference is naturally associated with generalization, and also happens in conventional supervised learning. Nevertheless, in the reinforcement learning paradigm its effects tend to be much more harmful. The reason for this is twofold. First, the combination of interference and bootstrapping can easily become unstable, since the updates are no longer strictly local. The convergence proofs for the algorithms derived from (4) and (5) are based on the fact that these operators are contraction mappings, that is, their successive application results in a sequence converging to a fixed point which is the solution for the Bellman equation [14,36]. When using approximators, however, this asymptotic convergence is lost, [...]
Another source of instability is a consequence of the fact that in on-line reinforcement learning the distribution of the incoming data depends on the current policy. Depending on the dynamics of the system, the agent can remain for some time in a region of the state space which is not representative of the entire domain. In this situation, the learning algorithm may allocate excessive resources of the function approximator to represent that region, possibly “forgetting” the previous stored information.
One way to alleviate the interference problem is to use a local function approximator. The more independent each basis function is from each other, the less severe this problem is (in the limit, one has one basis function for each state, which corresponds to the lookup-table case) [86]. A class of local functions that have been widely used for approximation is the radial basis functions (RBFs) [52].
So, in your kind of problem (n*n gridworld), an RBF neural network should produce better results.
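As a rough sketch of what that could look like for the gridworld (all names and parameters here are illustrative, not a definitive implementation):

```python
import numpy as np

def rbf_features(pos, centers, sigma=1.0):
    """Gaussian RBF features: each basis function responds mainly to nearby states."""
    d2 = ((centers - np.asarray(pos, dtype=float)) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# One centre per cell of a 10 x 10 grid (the lookup-table limit mentioned above):
centers = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
phi = rbf_features((3, 7), centers)

# Linear Q-function approximation: one weight vector per action (N, S, E, W).
weights = np.zeros((4, len(centers)))
q_values = weights @ phi      # Q(s, a) for all four actions; updates stay local
```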
References
Boyan, J. A. & Moore, A. W. (1995) Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann.
André da Motta Salles Barreto & Charles W. Anderson (2008) Restricted gradient-descent algorithm for value-function approximation in reinforcement learning, Artificial Intelligence 172 (2008) 454–482
Need some confirmation on the following statement: are these two equivalent?
1. MLP with sliding time windows
2. Time delay neural network (TDNN)
Can anyone confirm the statement, possibly with a reference? Thanks!
"Equivalent" is too generalizing but you can roughly say that in terms of architecture (at least regarding their original proposal - there have been more modifications like the MS-TDNN which is even more different from a MLP). The correct phrasing would be that TDNN is an extended MLP architecture [1].
Both use backpropagation and both are feedforward nets.
The main idea can probably be phrased like this:
Delaying the inputs of neurons located in a hidden or the output layer is similar to multiplying the layers beyond and helps with pattern scaling and translation and is close to integrating the input signal over time.
What makes it different from the MLP:
However, in order to deal with delayed or scaled input signals, the original definition of the TDNN required that all (delayed) links of a neuron that are connected to one input are identical.
This requirement was overthrown in later studies, however, like in [1], where past and present nodes have different weights (which obviously seems reasonable for a number of applications), making it equivalent to an MLP.
That's all regarding architecture comparisons. Let's talk about training. The results will be different: the whole training will differ if you feed the same sequential data into an MLP which only gets the current data one-by-one from a sliding window, versus feeding current and past data together into the TDNN. The big difference is context. With the MLP you'll have the context of past inputs in past activations. With the TDNN you'll have them in present activations, directly coupled to your present inputs. Again, MLPs have no temporal context capabilities (this is why recurrent neural networks are much more popular for sequential data) and the TDNN is an attempt to solve that. The way I see it, TDNN is basically an attempt to merge the two worlds of MLPs (basic backprop) and RNNs (context/sequences).
TL;DR: If you strip down the TDNN's purpose, you can say your statement holds true on an architectural level. But if you compare both architectures side by side in action, you will get different observations.
Here is a description of the TDNN taken from the Waibel et al. 1989 paper: "In our TDNN basic unit is modified by introducing delays D1 through Dn as shown in Fig. 1. J inputs of such unit now will be multiplied by several weights, one for each delay". This is essentially an MLP with a sliding window (see also Fig. 2 there).
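To illustrate the correspondence, here is a small sketch (illustrative, not from the paper) of building the sliding windows a windowed MLP would consume, which is also what a TDNN unit with delays D1 through Dn effectively sees:

```python
import numpy as np

def sliding_windows(signal, delays):
    """Each row is one window: `delays` past frames followed by the current frame."""
    window = delays + 1
    return np.array([signal[t:t + window]
                     for t in range(len(signal) - window + 1)])

x = np.arange(10)              # a toy 1-D input sequence
print(sliding_windows(x, 2))   # row t is [x(t), x(t+1), x(t+2)];
                               # the first two act as the delayed inputs
```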
Since a lot of these sites found on Google use mathematical notation and I have no idea what any of it means, I want to make a feedforward neural network like this:
      n1
i1          n3
      n2          o1
i2          n4
      n3
Now can someone explain to me how to find the value of o1? How is it possible to make a neuron active when none of its inputs are active?
If none of the inputs are live, then you won't get anything out of the output.
It's been a long time since I spent some time on this, but back in the day, we'd add noise to the equation. This can be in the form of inputs that are always on or by adding a small random amount to each input before shoving it at the neural network.
Interestingly, the use of noise in neural networks has been shown to have a biological analog. If you're trying to hear something and you add in a bit of white noise, it makes it easier to hear. The same goes for seeing.
As for your initial question – how to find the value of o1 depends on ...
The formula used throughout the neural network.
The values of n1 to n4.
The inputs.
http://www.cheshireeng.com/Neuralyst/nnbg.htm has some basic info on the maths.
Since the question isn't really clear to me... I'll say this in case it's what you're looking for:
Often times a bias neuron is added to the input and hidden layers to allow for the case you're mentioning. This extra neuron is always active and is used to handle the case when all other neurons on the layer are inactive.
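A minimal sketch of that bias-neuron idea (the values are illustrative): an extra input that is always 1 lets a neuron fire even when every "real" input is 0.

```python
def neuron_output(inputs, weights, bias_weight):
    # The bias acts like an input that is always 1, with its own weight.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias_weight * 1.0
    return 1 if total > 0 else 0

print(neuron_output([0, 0], [0.5, 0.5], bias_weight=0.3))   # 1: fires with no input
print(neuron_output([0, 0], [0.5, 0.5], bias_weight=-0.3))  # 0: stays silent
```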
This question is a good example of why "neural networks" do such an amazingly poor job of emulating the behavior of real-world neurons. Most real neurons have an intrinsic (or "natural") rate at which they fire action potentials, with no input from pre-synaptic neurons. The effect of pre-synaptic neurons is almost always to speed up or slow down this intrinsic firing rate, not to produce a single action potential in the post-synaptic neuron.
Why don't "neural networks" typically model this phenomenon? I don't know - you'd have to ask the people for whom "the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing".