I have a few questions about training a neural network using reinforcement learning, for example, DQN:
1. Should we use regularizers or dropouts when defining our model?
2. What can we monitor during the learning phase?
There isn't really a universal answer to this question. It depends on your environment and your approach, and the best thing to do would be to test with and without and compare the results.
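As a concrete way of setting up that comparison, here is a minimal Keras sketch (the state and action sizes are placeholders, not from your problem) of the same Q-network built with and without dropout and L2 regularization, so the two variants can be trained side by side:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

STATE_DIM, N_ACTIONS = 8, 4  # placeholder sizes for illustration

def build_q_network(use_regularization: bool) -> tf.keras.Model:
    # L2 weight penalty only in the regularized variant
    reg = regularizers.l2(1e-4) if use_regularization else None
    model = tf.keras.Sequential()
    model.add(layers.Dense(64, activation="relu",
                           input_shape=(STATE_DIM,), kernel_regularizer=reg))
    if use_regularization:
        model.add(layers.Dropout(0.2))
    model.add(layers.Dense(64, activation="relu", kernel_regularizer=reg))
    model.add(layers.Dense(N_ACTIONS, activation="linear"))  # one Q-value per action
    model.compile(optimizer="adam", loss="mse")
    return model

plain_net = build_q_network(use_regularization=False)
regularized_net = build_q_network(use_regularization=True)
```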
You could always start by monitoring your network loss and some environment performance metric per episode (if your environment is a game, you can monitor your score per episode).
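A rough sketch of that kind of logging, assuming a Gym-style `env` and an `agent` object with hypothetical `act`, `store` and `train_step` methods (these are placeholders, not a specific library API):

```python
episode_losses, episode_scores = [], []

for episode in range(500):
    state = env.reset()
    done, score, losses = False, 0.0, []
    while not done:
        action = agent.act(state)
        next_state, reward, done, _ = env.step(action)
        agent.store(state, action, reward, next_state, done)
        losses.append(agent.train_step())  # assumed to return the batch loss
        score += reward
        state = next_state
    # one loss value and one score value per episode, ready to plot
    episode_losses.append(sum(losses) / max(len(losses), 1))
    episode_scores.append(score)
    print(f"episode {episode}: score={score:.1f}, mean loss={episode_losses[-1]:.4f}")
```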
I have finished two neural network courses and done loads of reading on the subject. I am comfortable with Tensorflow and Keras and building advanced neural networks (multiple inputs, large data, special layers...). I also have a fairly deep understanding of the underlying mathematics.
My problem is that I know how to build neural networks but don't know the process by which an "expert" would create one for a specific application.
I can:
Collect loads of data and clean it up.
Train the neural network.
Fine-tune hyperparameters.
Export it for actual applications.
What I am missing is how to come up with the layers in the neural network (how wide, what kind...). I know it is somewhat trial and error and looking at what has worked for others. But there must be a process that people can use to come up with architectures* that actually work very well, for example state-of-the-art neural networks.
I am looking for a free resource that would help me understand this process of creating a very good architecture*.
*by architecture I mean the different layers that make up the network and their properties
I wrote my master's thesis on this topic:
Thoma, Martin. "Analysis and Optimization of Convolutional Neural Network Architectures." arXiv preprint arXiv:1707.09725 (2017).
Long story short: there are a couple of techniques for analysis (chapter 2.5) and algorithms that learn topologies (chapter 3), but in practice it is mostly trial and error / gut feeling.
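To make the "trial and error" part slightly less ad hoc, a common minimal approach (not from the thesis, just a sketch) is random search over a few architecture choices; here `x_train`, `y_train`, `x_val`, `y_val` stand in for whatever data you have prepared:

```python
import random
import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_layers, width, dropout, input_dim=32, n_classes=10):
    model = tf.keras.Sequential()
    model.add(layers.Dense(width, activation="relu", input_shape=(input_dim,)))
    for _ in range(n_layers - 1):
        model.add(layers.Dense(width, activation="relu"))
        model.add(layers.Dropout(dropout))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

search_space = {"n_layers": [2, 3, 4], "width": [64, 128, 256], "dropout": [0.0, 0.2, 0.5]}
best_acc, best_config = 0.0, None
for _ in range(10):  # a handful of random trials
    config = {k: random.choice(v) for k, v in search_space.items()}
    model = build_model(**config)
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                        epochs=5, verbose=0)
    acc = max(history.history["val_accuracy"])
    if acc > best_acc:
        best_acc, best_config = acc, config
```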
I have a question on backpropagation algorithm which is used in Deep Learning.
How should I update the weights when we have n training samples?
Should I update the weights for each sample, and then update them again with the next sample?
Or should I average the gradients over all the samples and then apply that average?
Please guide me as to which is the rational procedure.
Thanks,
Afshin
They are both rational options.
Both approaches are correct. They are respectively called "online" and "offline" learning.
Online learning
Online machine learning is used in the case where the data becomes available in a sequential fashion (excerpt of the definition on Wikipedia).
Offline learning
Offline or "batch" learning may be used when one has access to the entire training dataset at once. An advantage of batch learning is improved immunity to local optima, but this comes at the cost of more expensive training (the network often requires additional backpropagation iterations).
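A minimal NumPy sketch contrasting the two update schemes for a single linear layer with squared error (synthetic data, just to show the two update rules side by side):

```python
import numpy as np

def grad(w, x, y):
    # gradient of 0.5 * (w.x - y)^2 with respect to w
    return (w @ x - y) * x

lr = 0.01
X = np.random.randn(100, 3)           # 100 training samples, 3 features
Y = X @ np.array([1.0, -2.0, 0.5])    # synthetic targets

# Online (per-sample) learning: update the weights after every sample
w = np.zeros(3)
for x, y in zip(X, Y):
    w -= lr * grad(w, x, y)

# Offline (batch) learning: average the gradients over all samples, then update
w = np.zeros(3)
for _ in range(100):                  # repeated full passes over the data
    g = np.mean([grad(w, x, y) for x, y in zip(X, Y)], axis=0)
    w -= lr * g
```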
I'm starting a work on Internet traffic prediction (time series prediction) using artificial neural networks, but I have few experience with the matter.
Does anyone know which method is best for that (which type of neural network to use for time-series prediction)?
Is deep learning with unsupervised training a good idea for time-series learning?
You can do time-series prediction with neural nets, but it can get pretty tricky.
1) The obvious choice is a recurrent neural network (RNN). However, these can be really difficult to train, and I would not recommend RNNs if this is your first time using neural nets. Recently there has been some interesting work on easing the training of RNNs (e.g. Hessian-free optimization), but again - it's probably not for beginners ;-) Alternatively, you could try a scheme where you use a standard neural net (i.e. not an RNN) and try to predict the next frame of data from the previous ones (see the sketch after this list). That might work.
2) This question is too general, there is no categorical right answer. Yes, you can use unsupervised feature learning as part of your solution (e.g. pre-training your model), but if your end goal is time-series prediction you will need to do some supervised learning too.
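Here is a minimal sketch of the sliding-window idea from point 1), using Keras as an assumed toolkit and a synthetic sine wave as a stand-in for your traffic data: a plain feed-forward net predicts the next value of the series from the previous `window` values.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

series = np.sin(np.linspace(0, 50, 1000))   # placeholder for your traffic series
window = 20

# Build (input window, next value) training pairs
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(window,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                         # predicted next value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Predict the value following the last observed window
next_value = model.predict(series[-window:].reshape(1, -1))
```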
Good luck!
I am working on an independent project. I am studying chemistry in school, along with computer science, and would like to know if it is possible to model certain wave function phenomena (Schrödinger's equation, Hamiltonians, eigenvalues) using artificial neural networks.
My main questions are:
Would I be able to program and compute this on my laptop? My laptop is an Asus Q200e.
If it isn't possible on the laptop, would I be able to use my desktop, which has an i5 processor and a fast GPU?
Your questions
Yes, you may use your Asus Q200e to train your neural network.
Using a more powerful computer always helps. If you are willing to go the extra mile and perform the calculations on your GPU, the process will be even faster.
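If you end up using a modern framework such as TensorFlow (an assumption here, not something from your setup), a quick check of whether your desktop's GPU is actually visible looks like this:

```python
import tensorflow as tf

# Prints the list of GPUs TensorFlow can see; an empty list means CPU-only training
print(tf.config.list_physical_devices("GPU"))
```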
Applying neural networks to quantum mechanics
There is actually some literature on how to proceed with creating such neural networks. See this link to get a few pointers:
Artificial neural network methods in quantum mechanics
I am planning to use neural networks for approximating a value function in a reinforcement learning algorithm. I want to do that to introduce some generalization and flexibility on how I represent states and actions.
Now, it looks to me that neural networks are the right tool to do that, however I have limited visibility here since I am not an AI expert. In particular, it seems that neural networks are being replaced by other technologies these days, e.g. support vector machines, but I am unsure if this is a fashion matter or if there is some real limitation in neural networks that could doom my approach. Do you have any suggestion?
Thanks,
Tunnuz
It's true that neural networks are no longer in vogue, as they once were, but they're hardly dead. The general reason for their falling from favor was the rise of the Support Vector Machine, because SVMs converge to a global optimum and require fewer parameter specifications.
However, SVMs are very burdensome to implement and don't naturally generalize to reinforcement learning like ANNs do (SVMs are primarily used for offline decision problems).
I'd suggest you stick to ANNs if your task seems suitable to one, as within the realm of reinforcement learning, ANNs are still at the forefront in performance.
Here's a great place to start; just check out the section titled "Temporal Difference Learning" as that's the standard way ANNs solve reinforcement learning problems.
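As a very small illustration of that combination, here is a sketch (assuming TensorFlow/Keras; the state size and the `td_update` helper are placeholders for illustration) of TD(0) value estimation with a neural network as the function approximator:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

STATE_DIM = 4          # placeholder state size
GAMMA = 0.99

value_net = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(STATE_DIM,)),
    layers.Dense(1),   # estimated value V(s) of the state
])
value_net.compile(optimizer="adam", loss="mse")

def td_update(state, reward, next_state, done):
    # TD target: r + gamma * V(s'), or just r at the end of an episode
    next_value = 0.0 if done else value_net.predict(next_state[None], verbose=0)[0, 0]
    target = np.array([[reward + GAMMA * next_value]])
    # One gradient step moving V(s) toward the TD target
    value_net.fit(state[None], target, verbose=0)
```

You would call `td_update` once per environment transition inside your agent's training loop.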
One caveat though: the recent trend in machine learning is to use many diverse learning agents together via bagging or boosting. While I haven't seen this as much in reinforcement learning, I'm sure employing this strategy would still be much more powerful than an ANN alone. But unless you really need world class performance (this is what won the netflix competition), I'd steer clear of this extremely complex technique.
It seems to me that neural networks are kind of making a comeback. For example, this year there were a bunch of papers at ICML 2011 on neural networks. I would definitely not consider them abandonware. That being said, I would not use them for reinforcement learning.
Neural networks are a decent general way of approximating complex functions, but they are rarely the best choice for any specific learning task. They are difficult to design, slow to converge, and get stuck in local minima.
If you have no experience with neural networks, then you might be happier using a more straightforward method of generalization for RL, such as coarse coding.
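For reference, coarse coding is simple enough to sketch in a few lines (a toy 1-D example, with made-up interval widths): the state is represented by which of several overlapping "receptive fields" it falls into, and the value function is a linear combination of those binary features.

```python
import numpy as np

# Overlapping intervals covering states in [0, 10]
fields = [(c - 1.5, c + 1.5) for c in np.linspace(0, 10, 8)]

def features(state):
    # Binary feature vector: 1 for every field that contains the state
    return np.array([1.0 if lo <= state <= hi else 0.0 for lo, hi in fields])

weights = np.zeros(len(fields))

def value(state):
    return weights @ features(state)

def update(state, target, lr=0.1):
    # A TD-style update is just linear regression on the coarse-coded features
    global weights
    weights += lr * (target - value(state)) * features(state)
```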
Theoretically it has been proved that neural networks can approximate any function (given enough hidden neurons and the necessary inputs), so no, I don't think neural networks will ever be abandonware.
SVMs are great, but they cannot be used for all applications, while neural networks can be used for any purpose.
Using neural networks in combination with reinforcement learning is standard and well-known, but be careful to plot and debug your neural network's convergence to check that it works correctly, as neural networks are notoriously hard to implement and train correctly.
Also be very careful about the representation of the problem you give to your neural network (i.e. the input nodes): could you, or could an expert, solve the problem given what you feed as inputs to your net? Very often, people implementing neural networks don't give the network enough information to reason with, so be careful with that.