As I understand them, the basics of neural networks are that there are several inputs, weights, and outputs. There can also be hidden layers that add to the complexity of the whole thing.
If I have 100 inputs, 5 hidden layers and one output (yes or no), there will presumably be a LOT of connections, somewhere on the order of 100^5. Doing backpropagation via gradient descent seems like it will take a VERY long time.
How can I set up backpropagation in a way that is parallel (concurrent), to take advantage of multicore processors (or multiple processors)?
This is a language-agnostic question because I am simply trying to understand structure.
If you have 5 hidden layers (assuming 100 nodes each), you have roughly 5 * 100^2 weights (assuming the bias node is included in the 100 nodes), not 100^5, because there are only 100^2 weights between any two consecutive layers: 100 * 100 from the inputs to the first hidden layer, 100 * 100 between each pair of consecutive hidden layers, and 100 * 1 into the output, for about 50,100 weights in total.
If you use gradient descent, you'll have to calculate the contribution of each training sample to the gradient, so a natural way of distributing this across cores is to spread the training samples across the cores and sum their contributions to the gradient at the end.
With backpropagation, you can use batch backpropagation (accumulate weight changes from several training samples before updating the weights, see e.g. https://stackoverflow.com/a/11415434/288875).
I would think that the first option is much more cache-friendly (updates need to be merged between processors only once in each step).
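Here is a minimal sketch of that first option, splitting the samples across worker processes and summing the per-sample gradients. Everything in it (grad_sample, batch_gradient, and the toy linear least-squares model standing in for a real network's backpropagation) is made up for illustration.

import numpy as np
from multiprocessing import Pool

def grad_sample(args):
    # Gradient of 0.5 * (w.x - y)^2 for a single sample; in a real network
    # this is where one backpropagation pass would go.
    w, x, y = args
    return (w @ x - y) * x

def batch_gradient(w, X, Y, n_workers=4):
    # Spread the samples across worker processes, then sum the contributions.
    with Pool(n_workers) as pool:
        grads = pool.map(grad_sample, [(w, x, y) for x, y in zip(X, Y)])
    return sum(grads) / len(X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, Y = rng.normal(size=(1000, 100)), rng.normal(size=1000)
    w = np.zeros(100)
    for _ in range(10):                      # plain gradient descent
        w -= 0.1 * batch_gradient(w, X, Y)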
I'm calculating the accumulated distance between each pair of kernels inside an nn.Conv2d layer. However, for large layers it runs out of memory on a Titan X with 12 GB of memory. I'd like to know if it is possible to divide such calculations across two GPUs.
The code follows:
def ac_distance(layer):
    total = 0
    for p in layer.weight:
        for q in layer.weight:
            total += distance(p, q)
    return total
Here layer is an instance of nn.Conv2d and distance returns the sum of the differences between p and q. I can't detach the graph, however, because I need it later on. I tried wrapping my model in nn.DataParallel, but all calculations in ac_distance are done using only 1 GPU, even though training uses both.
Parallelism while training neural networks can be achieved in two ways.
Data Parallelism - Split a large batch into two halves and do the same set of operations, one half on each of two different GPUs
Model Parallelism - Split the computations and run them on different GPUs
As you have asked in the question, you would like to split the calculation, which falls into the second category. There are no out-of-the-box ways to achieve model parallelism. PyTorch provides primitives for parallel processing via the torch.distributed package. This tutorial goes through the details of the package comprehensively, and you can cook up an approach to achieve the model parallelism that you need.
However, model parallelism can be very complex to achieve. The more common approach is data parallelism with either torch.nn.DataParallel or torch.nn.DistributedDataParallel. In both methods, you run the same model on two different GPUs, and one large batch is split into two smaller chunks. With DataParallel, the gradients are gathered onto a single GPU and the optimization step happens there; with DistributedDataParallel, optimization takes place in parallel across the GPUs using multiprocessing.
In your case, if you use DataParallel, the computation would still take place on two different GPUs. If you notice an imbalance in GPU usage, it could be because of the way DataParallel has been designed. You can try using DistributedDataParallel, which according to the docs is the fastest way to train on multiple GPUs.
There are other ways to process very large batches too. This article goes through them in detail, and I'm sure it would be helpful. A few important points:
Do gradient accumulation for larger batches (see the sketch after this list)
Use DataParallel
If that doesn't suffice, go with DistributedDataParallel
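A minimal sketch of the gradient-accumulation point, with a toy model and toy data standing in for the real ones: four small batches contribute gradients before a single optimizer step, so together they act like one batch four times the size.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]  # toy batches

accumulation_steps = 4
optimizer.zero_grad()
for i, (inputs, targets) in enumerate(data):
    loss = loss_fn(model(inputs), targets)
    (loss / accumulation_steps).backward()   # gradients accumulate in .grad
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()                     # one update per 4 small batches
        optimizer.zero_grad()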
I'm new to NNs and I have this problem:
I have a dataset with 300 rows and 33 columns. Each row has 3 more columns for the results.
I'm trying to use an MLP to train a model so that when I have a new row, it estimates those 3 result columns.
I can easily reduce the error during training to 0.001, but when I use cross-validation it keeps estimating very poorly.
It estimates correctly if I use the same entries it was trained on, but if I use other values that weren't used for training, the results are very wrong.
I'm using two hidden layers with 20 neurons each, so my architecture is [33 20 20 3].
For the activation function I'm using the bipolar sigmoid function.
Do you have any suggestions on what I could try to change to improve this?
Overfitting
As mentioned in the comments, this perfectly describes overfitting.
I strongly suggest reading the Wikipedia article on overfitting, as it describes the causes well, but I'll summarize some key points here.
Model complexity
Overfitting often happens when your model is needlessly complex for the problem. I don't know anything about your dataset, but I'm guessing [33 20 20 3] is more parameters than necessary for the prediction.
Try running your cross-validation again, this time with either fewer layers or fewer nodes per layer. Right now you are using 33*20 + 20*20 + 20*3 = 1120 parameters (weights) to make your prediction; is this necessary?
Regularization
A common solution to overfitting is regularization. The driving principle is KISS (keep it simple, stupid).
By applying an L1 regularizer to your weights, you express a preference for using the smallest number of weights to solve your problem. The network will pull many weights to 0 as they aren't needed.
By applying an L2 regularizer to your weights, you express a preference for lower-rank solutions to your problem. This means that your network will prefer weight matrices that span lower dimensions. Practically this means your weights will be smaller numbers, and are less likely to be able to "memorize" the data.
What are L1 and L2? They are types of vector norms. L1 is the sum of the absolute values of your weights. L2 is the square root of the sum of squares of your weights. (L3 is the cube root of the sum of the absolute values cubed, L4 ...).
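As a small illustration, here is how those penalties might be computed and added to a loss; W, lam and base_loss are hypothetical values used only to show the idea.

import numpy as np

W = np.random.randn(20, 33)          # stand-in for one weight matrix, e.g. the 33 -> 20 layer
lam = 1e-3                           # regularization strength

l1 = np.abs(W).sum()                 # L1: sum of absolute values
l2 = np.sqrt((W ** 2).sum())         # L2: square root of the sum of squares

base_loss = 0.0                                # placeholder for the unregularized training loss
loss_l1 = base_loss + lam * l1                 # pushes many weights to exactly 0
loss_l2 = base_loss + lam * (W ** 2).sum()     # pushes all weights toward small values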
Distortions
Another commonly used technique is to augment your training data with distorted versions of your training samples. This only makes sense with certain types of data. For instance, images can be rotated, scaled, shifted, have Gaussian noise added, etc. without dramatically changing their content.
By adding distortions, your network will no longer memorize your data, but will instead learn to recognize things that look similar to your data. The number 1 rotated 2 degrees still looks like a 1, so the network should be able to learn from both of these.
Only you know your data. If this is something that can be done with your data (even just adding a little Gaussian noise to each feature), then maybe this is worth looking into. But do not use this blindly without considering the implications it may have on your dataset.
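For a tabular dataset like yours, a hedged sketch of the noise idea follows; the function name, the noise level sigma, and the number of copies are arbitrary choices you would need to tune for your data.

import numpy as np

def augment_with_noise(X, y, copies=2, sigma=0.01, seed=0):
    # X: (n_samples, n_features), y: (n_samples, n_targets); each copy keeps
    # the targets unchanged and perturbs the features with Gaussian noise.
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(copies):
        X_parts.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_parts.append(y)
    return np.vstack(X_parts), np.vstack(y_parts)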
Careful analysis of data
I put this last because it is an indirect response to the overfitting problem. Check your data before pumping it through a black-box algorithm (like a neural network). Here are a few questions worth answering if your network doesn't work:
Are any of my features strongly correlated with each other?
How do baseline algorithms perform? (Linear regression, logistic regression, etc.)
How are my training samples distributed among classes? Do I have 298 samples of one class and 1 sample of the other two?
How similar are my samples within a class? Maybe I have 100 samples for this class, but all of them are the same (or nearly the same).
I'm trying to navigate an agent in an n*n gridworld domain by using Q-learning + a feedforward neural network as a Q-function approximator. Basically the agent should find the best/shortest way to reach a certain terminal goal position (+10 reward). For every step the agent takes, it gets a -1 reward. In the gridworld there are also some positions the agent should avoid (-10 reward, terminal states, too).
So far I have implemented a Q-learning algorithm that saves all Q-values in a Q-table, and the agent performs well.
In the next step, I want to replace the Q-table with a neural network, trained online after every step of the agent. I tried a feedforward NN with one hidden layer and four outputs, representing the Q-values for the possible actions in the gridworld (north, south, east, west).
As input I used an n x n zero matrix that has a "1" at the current position of the agent.
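For clarity, here is a sketch of that encoding; the function name and shapes are only illustrative.

import numpy as np

def encode_state(n, row, col):
    # n x n zero matrix with a 1 at the agent's position, flattened into the
    # network's input vector (length n*n, e.g. 100 for a 10 x 10 map)
    grid = np.zeros((n, n))
    grid[row, col] = 1.0
    return grid.flatten()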
To reach my goal I tried to solve the problem from the ground up:
Explore the gridworld with standard Q-learning and use the Q-map as training data for the network once Q-learning is finished
--> worked fine
Use Q-learning and provide the updates of the Q-map as training data for the NN (batchSize = 1)
--> worked well
Replace the Q-map completely with the NN. (This is the point when it gets interesting!)
-> FIRST MAP: 4 x 4
As described above, I have 16 "discrete" inputs and 4 outputs, and it works fine with 8 neurons (ReLU) in the hidden layer (learning rate: 0.05). I used an epsilon-greedy policy, with epsilon decreasing from 1 to 0.1 within 60 episodes.
The test scenario is shown here. Performance is compared between standard Q-learning with a Q-map and "neural" Q-learning (in this case I used 8 neurons and different dropout rates).
To sum it up: neural Q-learning works well for small grids, and the performance is okay and reliable.
-> Bigger MAP: 10 x 10
Now I tried to use the neural network for bigger maps.
At first I tried this simple case.
In my case the neural net looks as follows: 100 inputs; 4 outputs; about 30 neurons (ReLU) in one hidden layer; again I used a decreasing exploration factor for the epsilon-greedy policy; over 200 episodes the learning rate decreases from 0.1 to 0.015 to increase stability.
At first I had problems with convergence and interpolation between single positions, caused by the discrete input vector.
To solve this I added some neighbouring positions to the vector, with values depending on their distance to the current position. This improved the learning a lot and the policy got better. Performance with 24 neurons is seen in the picture above.
Summary: the simple case is solved by the network, but only with a lot of parameter tuning (number of neurons, exploration factor, learning rate) and a special input transformation.
Now here are my questions/problems I still haven't solved:
(1) My network is able to solve really simple cases and examples in a 10 x 10 map, but it fails as the problem gets a bit more complex. In cases where failing is very likely, the network has no chance of finding a correct policy.
I'm open to any idea that could improve performance in these cases.
(2) Is there a smarter way to transform the input vector for the network? I'm sure that adding the neighbouring positions to the input vector on the one hand improves the interpolation of the Q-values over the map, but on the other hand makes it harder to train the network on special/important positions. I already tried a standard Cartesian two-dimensional input (x/y) at an early stage, but that failed.
(3) Is there another network type, other than a feedforward network with backpropagation, that generally produces better results for Q-function approximation? Have you seen projects where a feedforward NN performs well with bigger maps?
It's known that Q-Learning + a feedforward neural network as a q-function approximator can fail even in simple problems [Boyan & Moore, 1995].
Rich Sutton has a question in the FAQ of his website related to this.
A possible explanation is the phenomenon known as interference, described in [Barreto & Anderson, 2008]:
Interference happens when the update of one state–action pair changes the Q-values of other pairs, possibly in the wrong direction.
Interference is naturally associated with generalization, and also happens in conventional supervised learning. Nevertheless, in the reinforcement learning paradigm its effects tend to be much more harmful. The reason for this is twofold. First, the combination of interference and bootstrapping can easily become unstable, since the updates are no longer strictly local. The convergence proofs for the algorithms derived from (4) and (5) are based on the fact that these operators are contraction mappings, that is, their successive application results in a sequence converging to a fixed point which is the solution for the Bellman equation [14,36]. When using approximators, however, this asymptotic convergence is lost, [...]
Another source of instability is a consequence of the fact that in on-line reinforcement learning the distribution of the incoming data depends on the current policy. Depending on the dynamics of the system, the agent can remain for some time in a region of the state space which is not representative of the entire domain. In this situation, the learning algorithm may allocate excessive resources of the function approximator to represent that region, possibly “forgetting” the previous stored information.
One way to alleviate the interference problem is to use a local function approximator. The more independent each basis function is from each other, the less severe this problem is (in the limit, one has one basis function for each state, which corresponds to the lookup-table case) [86]. A class of local functions that have been widely used for approximation is the radial basis functions (RBFs) [52].
So, in your kind of problem (n*n gridworld), an RBF neural network should produce better results.
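As a rough illustration of what a local, RBF-style representation could look like for a gridworld (one Gaussian basis function per cell, approaching the lookup-table case as the widths shrink), here is a sketch; the function name and parameters are only an example.

import numpy as np

def rbf_features(n, row, col, sigma=0.5):
    # One Gaussian basis function centred on each of the n*n cells; the
    # feature value decays with the distance from the agent's position.
    centers = np.array([(r, c) for r in range(n) for c in range(n)])
    dist_sq = ((centers - np.array([row, col])) ** 2).sum(axis=1)
    return np.exp(-dist_sq / (2 * sigma ** 2))   # length n*n feature vector

# A linear Q-function on top of these features would then be
# Q(s, a) = rbf_features(n, row, col) @ w[a] for a per-action weight vector w[a].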
References
Boyan, J. A. & Moore, A. W. (1995) Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann.
Barreto, A. M. S. & Anderson, C. W. (2008) Restricted gradient-descent algorithm for value-function approximation in reinforcement learning. Artificial Intelligence 172, 454–482.
I'm very new to neural networks, but I am trying to create one for optical character recognition. I have 100 images of each digit from 0-9, each of size 24x14. The number of inputs for the neural network is 336, but I don't know how to choose the number of hidden neurons and output neurons.
How do I calculate it?
While the number of output neurons should be equal to the number of classes you want to discriminate, the size of the hidden layer is not so straightforward to set; it mainly depends on the trade-off between the complexity of the model and its generalization capabilities (see https://en.wikipedia.org/wiki/Artificial_neural_network#Computational_power).
The answers to this question can help:
training feedforward neural network for OCR
The number of output neurons is simply your number of classes (unless you only have 2 classes and are not using the one-hot representation, in which case you can make do with just 1 output neuron).
The number of hidden layers, and consequently the number of hidden neurons, is not as straightforward as you might think as a beginner. Every problem will have a different configuration that works for it. You have to try multiple things out. Just keep this in mind though:
The more layers you add, the more complex your calculations become and hence, the slower your network will train.
One of the best and easiest practices is to keep the number of hidden neurons fixed in each layer.
Keep in mind what hidden neurons in each layer mean. The input layer is your starting features and each subsequent hidden layer is what you do with those features.
Think about your problem and the features you are using. If you are dealing with images, you might want a large number of neurons in your first hidden layer to break apart your features into smaller units.
Usually your results will not vary much once you increase the number of neurons beyond a certain point. You'll get a feel for this as you practice more. Just keep in mind the trade-offs you are making.
Good luck :)
I'm trying to test the efficiency of neural networks as approximation functions.
The function I need to approximate has 5 inputs and 1 output; which structure should I use?
I have no idea what criteria should be applied in order to decide the number of hidden layers and the number of nodes for each layer.
Thank you in advance,
Regards
Giuseppe.
I always use a single hidden layer. Theoretically, there are no functions that can be approximated by 2 or more hidden layers that cannot also be approximated with one. To make a single hidden layer more complex, add more hidden nodes.
Typically, the number of hidden nodes is varied to observe the effect on model performance (as measured by accuracy or whatever metric you're using). Too few hidden nodes results in a worse fit due to underfitting (the neural network's output function is too simple, and misses important details in the data). Too many hidden nodes results in a worse fit due to overfitting (the neural network becomes so flexible that it chases every bit of noise in the data).
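One hedged way to do that sweep, using scikit-learn's MLPRegressor and synthetic data purely as stand-ins for your own library and your 5-input function:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Hypothetical data: 5 inputs, 1 output, matching the question's setup.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X).sum(axis=1)        # placeholder target function

for n_hidden in (2, 4, 8, 16, 32, 64):
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000)
    score = cross_val_score(model, X, y, cv=5).mean()   # R^2 by default
    print(n_hidden, score)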
Note that for classification problems you need at least 2 hidden layers if you want to separate concave polygons.
I'm not sure how the number of hidden layers affects function approximation.