How to declare per-neuron connections in a PyTorch model?

I want to create a brain-like mess:
We have an input tensor I of length n and an output tensor O of length p.
In between, we have K "intersection" layers and J "creation" layers.
At intersection layers, neurons share their current values with a random subset of "close" neurons (within +- range c), in the form w_i * current_neuron_val.
At creation layers, new neurons are created from a set of ReLU'd "close" neurons (within +- range c); those "close" neurons do not go into deeper layers.
Can we do such a thing with PyTorch so that the resulting model is trainable?

You should create your own class based on nn.Module and implement forward (and, if needed, backward) manually. I suspect you will have to control those brain-like-mess connections through feature labeling inside the tensor (i.e. "0" means no connection).
Because all frameworks work on tensors, layers are fully connected by default, and any other behavior has to be induced in the code.
See the code of BindsNET for how they implement a spike-like model using PyTorch.
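As a minimal sketch of that "0 means no connection" idea (the class name, sizes, and the banded-mask choice below are illustrative assumptions, not something from the question), one possible pattern is a custom nn.Module that multiplies a dense weight matrix by a fixed binary mask, so masked-out connections contribute nothing and receive no effective gradient:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Dense layer whose connectivity is restricted by a fixed binary mask."""
    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # The mask is constant, so register it as a buffer, not a parameter.
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        # Masked weights: entries where mask == 0 act like missing connections.
        return F.linear(x, self.weight * self.mask, self.bias)

# Example: each neuron only connects to "close" neurons within +- c positions.
n, c = 16, 2
mask = torch.zeros(n, n)
for i in range(n):
    mask[i, max(0, i - c):min(n, i + c + 1)] = 1.0

layer = MaskedLinear(n, n, mask)
out = layer(torch.randn(4, n))  # trains with autograd like any other layer
```

A random subset of the close neurons could be obtained by additionally zeroing a random fraction of the ones inside the band; the main point is that connectivity lives in the mask, so gradients never flow through the "missing" connections and the whole thing stays trainable.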

Related

Can a convolutional neural network be built with perceptrons?

I was reading this interesting article on convolutional neural networks. It showed an image explaining that for every receptive field of 5x5 pixels/neurons, a value for a hidden neuron is calculated.
We can think of max-pooling as a way for the network to ask whether a given feature is found anywhere in a region of the image. It then throws away the exact positional information.
So max-pooling is applied.
With multiple convolutional layers, it looks something like this:
But my question is: this whole architecture could be built with perceptrons, right?
For every convolutional layer, one perceptron is needed, with layers:
input_size = 5x5;
hidden_size = 10; e.g.
output_size = 1;
Then, for every receptive field in the original image, the 5x5 area is fed into a perceptron to produce the value of one neuron in the hidden layer. So basically this is done for every receptive field:
So the same perceptron is used 24x24 times to construct the hidden layer, because:
...we're going to use the same weights and bias for each of the 24×24 hidden neurons.
And this works from the hidden layer to the pooling layer as well, with input_size = 2x2; output_size = 1. And in the case of a max-pool layer, it's just a max() function over an array.
and then finally:
The final layer of connections in the network is a fully-connected
layer. That is, this layer connects every neuron from the max-pooled
layer to every one of the 10 output neurons.
which is a perceptron again.
So my final architecture looks like this:
-> 1 perceptron for every convolutional layer/feature map
-> run this perceptron for every receptive field to create feature map
-> 1 perceptron for every pooling layer
-> run this perceptron for every field in the feature map to create a pooling layer
-> finally, input the values of the pooling layer into a regular all-to-all (fully connected) perceptron
Or am I overlooking something? Or is this already how they are programmed?
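To make the weight-sharing point concrete, here is a small PyTorch sketch (assuming a single 28x28 input, which is consistent with the 24x24 hidden layer above; everything else is just illustration): applying the same 5x5 "perceptron" to every receptive field gives exactly an ordinary convolution with those shared weights.

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 1, 28, 28)   # one 28x28 input image
w = torch.randn(1, 1, 5, 5)       # the shared 5x5 weights ("one perceptron")
b = torch.randn(1)

# "Run the same perceptron on every 5x5 receptive field":
patches = F.unfold(img, kernel_size=5)        # (1, 25, 24*24) columns of patches
per_field = w.view(1, -1) @ patches + b       # same dot product for every field
per_field = per_field.view(1, 1, 24, 24)

# An ordinary convolution with the same shared weights:
conv = F.conv2d(img, w, b)

print(torch.allclose(per_field, conv, atol=1e-5))  # True: shared weights == convolution
```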
The answer very much depends on what exactly you call a Perceptron. Common options are:
The complete architecture. Then no, simply because a CNN is by definition a different NN.
A model of a single neuron, specifically y = 1 if (w.x + b) > 0 else 0, where x is the input of the neuron, w and b are its trainable parameters and w.x denotes the dot product. Then yes, you can force a bunch of these perceptrons to share weights and call it a CNN. You'll find variants of this idea being used in binary neural networks. (A short sketch of this neuron and the classic training rule follows this list.)
A training algorithm, typically associated with the Perceptron architecture. This reading makes little sense for the question, because the learning algorithm is in principle orthogonal to the architecture. Besides, you cannot really use the Perceptron algorithm for anything with hidden layers, which suggests no as the answer in this case.
The loss function associated with the original Perceptron. This notion of Perceptron is orthogonal to the problem at hand; your loss function with a CNN is given by whatever you try to do with your whole model. You could eventually use it, but it is non-differentiable, so good luck :-)
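For reference, a minimal NumPy sketch of that single-neuron Perceptron (y = 1 if w.x + b > 0 else 0) and of the classic Perceptron training rule mentioned above (a toy example, purely illustrative):

```python
import numpy as np

def perceptron_output(x, w, b):
    # The hard-threshold neuron: y = 1 if w.x + b > 0 else 0
    return 1 if np.dot(w, x) + b > 0 else 0

def perceptron_train(X, y, lr=1.0, epochs=10):
    # The classic Perceptron learning rule; it only handles a single layer,
    # which is why it does not extend to networks with hidden layers.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = perceptron_output(xi, w, b)
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# Learn a linearly separable toy problem (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([perceptron_output(xi, w, b) for xi in X])  # [0, 0, 0, 1]
```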
A sidenote rant: you will see people refer to feed-forward, fully-connected NNs with hidden layers as "Multilayer Perceptrons" (MLPs). This is a misnomer; there are no Perceptrons in MLPs (see e.g. this discussion on Wikipedia), unless you go explore some really weird ideas. It would make more sense to call these networks Multilayer Linear Logistic Regression, because that's what they used to be composed of, up until about six years ago.

Interpret the output of neural network in matlab

I have built a neural network model with 3 classes. I understand that the ideal output for a classification problem is a 1 for the correct class and 0s for the other classes; for example, if the first element of the output vector indicates how strongly the data belongs to the first class, the ideal result for that class is [1, 0, 0]. But the output on the test data will not look like that; instead it will be real numbers like [2.4, -1, 0.6]. So how do I interpret this result? How do I decide which class the test data belongs to?
I have tried taking the absolute values and turning the maximum element into 1 and the others into zeros; is this correct?
Learner.
It appears your neural network is badly designed.
Regardless of your structure (the number of input, hidden, and output layers), when you are doing a multi-class classification problem you must ensure each of your output neurons evaluates an individual class, that is, each of them has a bounded output, in this case between 0 and 1. Use almost any of the standard squashing functions on the output layer to achieve this.
Nevertheless, for the neural network to work properly, you must keep in mind that every single neuron path (from input to output) operates as a classifier, that is, it defines a region of your input space which is going to be classified.
Under this framework, every single neuron has a directly interpretable role in the non-linear expansion the NN is defining, particularly when there are few hidden layers. This follows from the general expression of neural networks:
Y_out = F_n(Y_{n-1} * w_n - t_n)
...
Y_1 = F_0(Y_in * w_0 - t_0)
For example, with radial basis neurons, i.e. F_n = sqrt(sum_i (Y_{n,i} - R_{n,i})^2) and w_n = 1 (identity):
Y_{n+1} = sqrt(sum_i (Y_{n,i} - R_{n,i})^2)
a d_n-dimensional spherical cluster classification (d_n being the dimension of layer n-1) is induced from the first layer. Similarly, elliptical clusters can be induced. When two radial basis neuron layers are stacked under that structure of spherical/elliptical clusters, unions and intersections of spherical/elliptical clusters are induced; three layers give unions and intersections of the previous, and so on.
When using linear neurons, i.e. F_n = (.) (identity), linear classifiers are induced, that is, the input space is divided by d_n-dimensional hyperplanes; with two layers, unions and intersections of hyperplanes are induced, three layers give unions and intersections of the previous, and so on.
Hence, you can see that the number of neurons per layer is the number of classifiers per class. So if the geometry of the space is, to put it really graphically, two clusters for class A, one cluster for class B and three clusters for class C, you will need at least six neurons per layer. Thus, assuming you could expect anything, you can consider, as a very rough approximation, about n neurons per class per layer, that is, n to n^2 minimum neurons per class per layer. This number can be increased or decreased according to the topology of the classification.
Finally, the best advice here, for n outputs (classes) and r inputs, is:
Have r good classifier neurons in the first layers, radial or linear, to segment the space according to your expectations,
Have n to n^2 neurons per layer, or as many as the difficulty of your problem requires,
Have 2-3 layers; only increase this number after getting clear results,
Have n thresholding neurons on the last layer (only one such layer), as a continuous function from 0 to 1 (make the output crisp in the code).
Cheers...
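Beyond the design advice above, the direct practical answer to "which class does [2.4, -1, 0.6] belong to?" is usually to take the index of the largest output (optionally after squashing the scores into the 0-1 range, e.g. with a softmax), rather than taking absolute values, since a large negative score is evidence against a class, not for it. A tiny NumPy illustration using the numbers from the question:

```python
import numpy as np

raw = np.array([2.4, -1.0, 0.6])     # raw network outputs for the three classes

# Optional: squash into pseudo-probabilities that sum to 1 (softmax).
probs = np.exp(raw - raw.max())
probs /= probs.sum()

predicted_class = int(np.argmax(raw))  # argmax of raw scores = argmax of probs
print(probs.round(3), "-> class", predicted_class)  # [0.834 0.028 0.138] -> class 0
```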

Disconnect some input-hidden layer connections in MLP neural network in MATLAB

I am using the Neural Network (NN) wizard in MATLAB for some implementations. I can also use the code-based version of the NN in MATLAB, which is available after constructing the NN with the wizard (that part is clear).
When we build an NN in MATLAB, the input-hidden layer is fully connected. For example, if you have 4 inputs in the input layer and 2 neurons in the hidden layer, there is a fully connected relation between the 4 inputs and the 2 neurons in the hidden layer. I want to manipulate these connections, for example disconnect the 3rd input's connection to the 1st neuron and the 2nd input's connection to the 2nd neuron in the hidden layer. How can this be done in MATLAB?
Thank you in advance for any guidance.
I read the NN documentation in MATLAB completely. With the following properties we can access each connection and change its weights and biases so that the desired connection is taken out of duty.
For an NN with one hidden layer:
Network.IW{1,1} = the matrix of weights from the Inputs to the Hidden layer.
Network.LW{2,1} = the matrix of weights from the Hidden layer to the Output layer.
Network.b{1,1} = the bias vector of the Hidden layer.
Network.b{2,1} = the bias vector of the Output layer.
Then we can set to 0 those connections (weights, and biases if desired) between the Input and Hidden layers as we wish. With this type of configuration, we can re-construct the neural network infrastructure.
If you want to randomize this switching of some nodes on and off, you can also use the dropoutLayer in MATLAB. This works best for deep NNs.
https://in.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.dropoutlayer.html
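For comparison only (the MATLAB properties above are what apply to the wizard-generated network), the same "zero specific connections and keep them zero" idea looks roughly like this in PyTorch; the indices below mirror the example in the question and are otherwise arbitrary:

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)            # 4 inputs, 2 hidden neurons, fully connected

# Connections to disable: (hidden neuron index, input index).
disabled = [(0, 2), (1, 1)]        # 3rd input -> 1st neuron, 2nd input -> 2nd neuron

mask = torch.ones_like(layer.weight)
for i, j in disabled:
    mask[i, j] = 0.0

with torch.no_grad():
    layer.weight *= mask           # zero the chosen weights now

# Zero the matching gradients so optimizer steps never revive these connections.
layer.weight.register_hook(lambda grad: grad * mask)
```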

Artificial Neural Networks: Choosing initial neurons

How is the initial structure (the neurons and the connections between them) chosen? My book only states that we give the connections random weights before we train the network.
I think that we would add neurons during the training like this:
Start with a completely empty network
The first value I generate during the training will not exist
Add a neuron to correspond to this value, with a random weight
What you are after is a self-organizing ANN. Usually, the connections are organized by hand into a model that the developer thinks will have sufficient power to perform the necessary computation. You can of course start with a random selection of nodes with random connections, but the evolution of such a network will probably take a lot longer than training a standard two- or three-layer network.
So yes, you are right that you would use a similar approach when building a self-organizing network. Keep track of two sets of genetic algorithms, one for the structure and one for the weights (or combine the two in some devious way), and evolve as you please.
I do not believe the question is about self-organising or GA-evolved ANNs. It sounds more like it is about the most common ANN: a perceptron (single- or multi-layer), in which case the structure of the network (the number of layers and the size of each layer) must be hand-chosen at the beginning. A simple rule of thumb for initialising the weights is to pick uniformly random values between -1.0 and 1.0.
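For illustration, that "hand-chosen structure, random initial weights" setup might look like the PyTorch sketch below (the layer sizes are arbitrary; the uniform [-1, 1] range is the rule of thumb mentioned above):

```python
import torch.nn as nn

# The structure is chosen by hand: 10 inputs, one hidden layer of 5, 2 outputs.
model = nn.Sequential(
    nn.Linear(10, 5),
    nn.Sigmoid(),
    nn.Linear(5, 2),
)

# Only the weights (and biases) start out random; the connections themselves are fixed.
for layer in model:
    if isinstance(layer, nn.Linear):
        nn.init.uniform_(layer.weight, -1.0, 1.0)
        nn.init.uniform_(layer.bias, -1.0, 1.0)
```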

Matlab neural network simulate up to hidden layer

I have trained a 3-layer (input, hidden and output) feedforward neural network in Matlab. After training, I would like to simulate the trained network with an input test vector and obtain the response of the neurons of the hidden layer (not the final output layer). How can I go about doing this?
Additionally, after training a neural network, is it possible to "cut away" the final output layer and make the current hidden layer as the new output layer (for any future use)?
Extra-info: I'm building an autoencoder network.
The trained weights of a trained network are available in the net.LW property (layer-to-layer weights; the input-to-layer weights are in net.IW). You can use these weights to compute the hidden layer outputs.
From Matlab Documentation
nnproperty.net_LW
Neural network LW property.
NET.LW
This property defines the weight matrices of weights going to layers
from other layers. It is always an Nl x Nl cell array, where Nl is the
number of network layers (net.numLayers).
The weight matrix for the weight going to the ith layer from the jth
layer (or a null matrix []) is located at net.LW{i,j} if
net.layerConnect(i,j) is 1 (or 0).
The weight matrix has as many rows as the size of the layer it goes to
(net.layers{i}.size). It has as many columns as the product of the size
of the layer it comes from with the number of delays associated with the
weight:
net.layers{j}.size * length(net.layerWeights{i,j}.delays)
In addition to using the input and layer weights and biases, you may add an output connection from the desired layer (after training the network). I found it possible and easy, but I did not examine its correctness.
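For intuition, the idea behind both answers, either recomputing the hidden response from the stored weights or simply keeping the encoder half of the network, looks like this in a PyTorch sketch (illustrative only; in MATLAB the corresponding arrays are net.IW, net.LW and net.b):

```python
import torch
import torch.nn as nn

# A 3-layer autoencoder-style network: input -> hidden -> output.
encoder = nn.Sequential(nn.Linear(8, 3), nn.Tanh())
decoder = nn.Linear(3, 8)
autoencoder = nn.Sequential(encoder, decoder)

# ... train `autoencoder` here ...

x = torch.randn(1, 8)

# Option 1: recompute the hidden response manually from the trained weights
# (the analogue of using the input weights and hidden-layer bias directly).
W1, b1 = encoder[0].weight, encoder[0].bias
hidden_manual = torch.tanh(x @ W1.T + b1)

# Option 2: "cut away" the output layer and reuse the encoder on its own.
hidden = encoder(x)

print(torch.allclose(hidden, hidden_manual))  # True
```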