Explanation of Diagram of Matlab Neural Network

My neural network looks like this, but I'm a bit confused by this diagram.
Clearly we have 10 input values and 2 output values.
There are also 10 hidden neurons. So I assume each of the 10 inputs is connected to each of the 10 hidden neurons?
Also, what do the W's and b's mean?

I have yet to find a description of this diagram in MATLAB's documentation, but it is a simplification of the diagram shown here. You have one hidden layer whose neurons are each connected to all of the inputs. Note that the number of hidden neurons won't always be the same as the number of inputs; they are both 10 here. W = weights, b = biases. There is a nice intro here.
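To make the W and b boxes concrete, here is a rough numpy sketch of the computation the diagram represents (this is only an illustration, not MATLAB's actual code; the random values and the tanh choice are mine, though tansig, MATLAB's usual default hidden transfer function, behaves like tanh):

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)          # the 10 input values
W = rng.normal(size=(10, 10))    # hidden-layer weights: each hidden neuron connects to every input
b = rng.normal(size=10)          # hidden-layer biases: one per hidden neuron

hidden = np.tanh(W @ x + b)      # 10 hidden neuron outputs

W_out = rng.normal(size=(2, 10)) # output-layer weights: each output connects to every hidden neuron
b_out = rng.normal(size=2)       # output-layer biases
y = W_out @ hidden + b_out       # the 2 output values (linear output layer)
print(y.shape)                   # (2,)
```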

Related

In neural networks, do bias terms have a weight?

I am trying to code my own neural networks, but there is something I don't understand about bias terms. I know each link from one neuron to another has a weight, but does the link between a bias and the neuron it's connected to also have a weight? Or can I think of that weight as always being 1 and never changing?
Thanks
The bias terms do have weights, and typically you add a bias to every neuron in the hidden layers as well as to the neurons in the output layer (prior to squashing).
Have a look at the basic structure of artificial neurons; you'll see that the bias is added as a weight w_k0 = b_k on a constant input of 1. For more thorough examples, see e.g. this link, containing formulas as well as visualisations of multi-layered NNs.
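As a small sketch of that idea (plain numpy, the function and names are mine): append a constant input of 1 and treat the bias as just another trainable weight.

```
import numpy as np

def neuron_output(x, weights, bias_weight):
    """One artificial neuron: the bias is a weight on a constant input of 1."""
    x_with_bias = np.append(x, 1.0)           # fixed extra input of 1 for the bias
    w = np.append(weights, bias_weight)       # bias weight w_k0 = b_k, trained like the others
    return 1.0 / (1.0 + np.exp(-(w @ x_with_bias)))  # sigmoid squashing

x = np.array([0.5, -1.2, 3.0])
print(neuron_output(x, weights=np.array([0.1, 0.4, -0.2]), bias_weight=0.3))
```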
For the discussion of choice of weights, refer to the following stats.stackexchange thread:
https://stats.stackexchange.com/questions/47590/what-are-good-initial-weights-in-a-neural-network

What do P letters mean in neural network layer scheme?

In the Wikipedia article about the MNIST database it is said that the lowest error rate is achieved by a "committee of 35 convolutional networks" with the scheme:
1-20-P-40-P-150-10
What does this scheme mean?
The numbers are probably numbers of neurons. But then what does the 1 mean?
What do P letters mean?
In this particular scheme, 'P' means 'pooling' layer.
So, the basic structure is the following:
One grayscale input image
20 images after convolution layer (20 different filters)
Pooling layer
40 outputs from next convolution
Pooling layer
150... can be either 150 small convolution outputs or just 150 fully-connected neurons
10 output fully-connected neurons
That's why it is 1-20-P-40-P-150-10. Not the best notation, but still pretty clear if you are familiar with CNNs; a rough sketch of this layout is shown below.
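Here is one way the scheme could be re-created in Keras. Only the layer counts follow the scheme; the kernel sizes, pooling sizes, and activations are my assumptions, not the settings used by the original committee of networks.

```
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                       # 1  : one grayscale MNIST image
    layers.Conv2D(20, kernel_size=5, activation='tanh'),   # 20 : 20 feature maps from convolution
    layers.MaxPooling2D(pool_size=2),                      # P  : pooling layer
    layers.Conv2D(40, kernel_size=5, activation='tanh'),   # 40 : 40 feature maps from convolution
    layers.MaxPooling2D(pool_size=2),                      # P  : pooling layer
    layers.Flatten(),
    layers.Dense(150, activation='tanh'),                  # 150: fully-connected layer
    layers.Dense(10, activation='softmax'),                 # 10 : one output per digit class
])
model.summary()
```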
You can read more details about the internal structure of CNNs in Yann LeCun's foundational article "Gradient-Based Learning Applied to Document Recognition".

Back propagation with a simple ANN

I watched a lecture and derived equations for back propagation, but it was in a simple example with 3 neurons: an input neuron, one hidden neuron, and an output neuron. This was easy to derive, but how would I do the same with more neurons? I'm not talking about adding more layers, I'm just talking about adding more neurons to the already existing three layers: the input, hidden, and output layer.
My first guess would be to use the equations I've derived for the network with just 3 neurons and 3 layers and iterate across all possible paths to each of the output neurons in the larger network, updating each weight. However, this would cause certain weights to be updated more than once. Can I just do this or is there a better method?
If you want to learn more about backpropagation, I recommend reading this page from Stanford University: http://cs231n.github.io/optimization-2/. It will really help you understand backprop and all the math underneath.
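Regarding the double-update worry: if you write the derivation in matrix form, every weight appears in exactly one matrix entry and receives exactly one gradient per pass, so nothing is updated twice. A minimal numpy sketch of one such pass for a single hidden layer (sigmoid activations and squared-error loss are my choices for brevity, not the only option):

```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 5, 2
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)
x, target, lr = rng.normal(size=n_in), rng.normal(size=n_out), 0.1

# Forward pass
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Backward pass: same chain rule as the 3-neuron case, done with matrices,
# so each weight gets exactly one gradient per pass.
delta_out = (y - target) * y * (1 - y)        # error signal at the output layer
delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error signal at the hidden layer

W2 -= lr * np.outer(delta_out, h)
b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x)
b1 -= lr * delta_hid
```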

classify the units in Deep learning for image classification

Suppose we have a database with 10 classes, and we do a classification test on it with a Deep Belief Network or a Convolutional Neural Network. The question is: how can we tell which neurons in the last layer are related to which object?
In one of the posts, a person wrote: "to understand which neurons are for an object like shoes and which ones are not, you will put all the units in the last layer into another supervised classifier (this can be anything like a multi-class SVM or a softmax layer)". I do not know how this should be done; I need more explanation.
If you have 10 classes, make your last layer have 10 neurons and use the softmax activation function. This will make sure that they all lie between 0 and 1 and add up to 1. Then, simply use the index of the neuron with the largest value as your output class.
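As a small illustration of that last step (plain numpy, not tied to any particular DBN/CNN library, and the logits here are made up):

```
import numpy as np

def softmax(z):
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.2, 0.3, -0.5, 2.1, 0.0, -1.3, 0.7, 0.1, -0.2, 1.9])  # 10 last-layer units
probs = softmax(logits)        # all in (0, 1) and summing to 1
predicted_class = int(np.argmax(probs))
print(predicted_class, probs[predicted_class])
```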
You can also look into class activation maps, which do something similar to what you are asking for. Here is an insightful blog post explaining CAMs.

ANN bypassing hidden layer for an input

I have just been set an assignment to calculate some ANN outputs and write an ANN. Simple stuff, I've done it before, so I don't need any help with general ANN stuff. However, there is something that is puzzling me. In the assignment, the topology is as follows (I won't upload the diagram as it is his intellectual property):
2 layers, 3 hidden nodes and one output.
Input x1 goes to 2 hidden nodes and the output node.
Input x2 goes to 2 hidden nodes.
The problem is the ever so usual XOR. He has NOT mentioned anything about this kind of topology before, and I have definitely attended each lecture and listened intently. I am a good student like that :)
I don't think this counts as homework as I need no help with the actual tasks in hand.
Any insight as to why one would use a network with a topology like this would be brilliant.
Regards
Does the neural net look like the above picture? It looks like a common XOR topology with one hidden layer and a bias neuron. The bias neuron basically helps you shift the values of the activation function to the left or the right.
For more information on the role of the bias neuron, take a look at the following answers:
Role of Bias in Neural Networks
XOR problem solvable with 2x2x1 neural network without bias?
Why is a bias neuron necessary for a backpropagating neural network that recognizes the XOR operator?
Update
I was able to find some literature about this. Apparently it is possible for an input to skip the hidden layer and go directly to the output layer. This is called a skip-layer connection and is used to model traditional linear regression within a neural network. This page from the book Neural Network Modeling Using SAS Enterprise Miner describes the concept, and this page from the same book goes into a little more detail as well.
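For illustration, here is a minimal numpy sketch of one way such a topology could be wired (two hidden nodes plus biases, with x1 also connected straight to the output node). The weights are arbitrary placeholders, not a trained XOR solution.

```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x1, x2, w_hid, b_hid, w_out, b_out, w_skip):
    """2 inputs -> 2 hidden nodes -> 1 output, with x1 also wired straight to the output."""
    h = sigmoid(w_hid @ np.array([x1, x2]) + b_hid)   # both inputs reach the hidden nodes
    out = sigmoid(w_out @ h + w_skip * x1 + b_out)    # skip-layer connection from x1
    return out

rng = np.random.default_rng(0)
print(forward(1.0, 0.0,
              w_hid=rng.normal(size=(2, 2)), b_hid=rng.normal(size=2),
              w_out=rng.normal(size=2), b_out=rng.normal(), w_skip=rng.normal()))
```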