I am working on an independent project. I am studying chemistry in school, along with computer science, and would like to know if it is possible to model certain wave function phenomena (Schrödinger's equation, Hamiltonians, eigenvalues) using Artificial Neural Networks.
My main questions are:
Would I be able to program and compute this from my laptop? My laptop is an Asus Q200e.
If it is not possible from the laptop, would I be able to use my desktop, which has an i5 processor and a fast GPU?
Your questions
Yes, you may use your Asus Q200e to train your neural network.
Using a more powerful computer always helps. If you are willing to go the extra mile and perform the calculations on your GPU, the process will be even faster.
Applying neural networks to quantum mechanics
There is actually some literature on how to proceed with creating such neural networks. See this link to get a few pointers:
Artificial neural network methods in quantum mechanics
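To make this concrete, here is a minimal sketch of the general idea (my own toy example, not taken from the article above): a small TensorFlow network serves as a trial wavefunction for the 1D harmonic oscillator, H = -1/2 d²/dx² + 1/2 x², and its weights are trained to minimize the Rayleigh quotient <psi|H|psi> / <psi|psi> on a grid. Something this size runs comfortably on a laptop CPU.

```python
# Toy example: neural-network trial wavefunction for the 1D harmonic oscillator.
# The architecture, grid, and learning rate are arbitrary choices, not anything
# prescribed by the literature linked above.
import numpy as np
import tensorflow as tf

x = tf.constant(np.linspace(-6.0, 6.0, 400), dtype=tf.float32)
dx = x[1] - x[0]

# psi_theta(x): a small feed-forward network mapping a position to an amplitude.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-2)

def energy():
    # Rayleigh quotient <psi|H|psi> / <psi|psi> evaluated on the grid.
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            psi = tf.squeeze(model(tf.reshape(x, (-1, 1))))
        dpsi = inner.gradient(psi, x)            # d psi / dx
    d2psi = outer.gradient(dpsi, x)              # d^2 psi / dx^2
    h_psi = -0.5 * d2psi + 0.5 * x**2 * psi      # H psi for V(x) = x^2 / 2
    num = tf.reduce_sum(psi * h_psi) * dx
    den = tf.reduce_sum(psi * psi) * dx
    return num / den

for step in range(2000):
    with tf.GradientTape() as tape:
        e = energy()
    grads = tape.gradient(e, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print("estimated ground-state energy:", float(energy()))  # exact value is 0.5
```

The variational principle guarantees the quotient is an upper bound on the true ground-state energy, so watching it decrease toward 0.5 (in units with hbar = m = omega = 1) is a simple sanity check.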
Related
I'm working on a virtual machine on a remote server and I want to train a neural network on it, but I don't have GPUs to use in this VM. Is it possible to train the net on this VM using the CPU only? And if that is the case, does it work with a large dataset, or will that be a problem?
I have used Tensorflow for training a deep neural network, both with a GPU and with the CPU only. The rest of my response is in the context of Tensorflow.
Please be aware that Convolutional Neural Nets (CNNs) are generally more resource-hungry than standard feed-forward neural networks because CNNs deal with much higher-dimensional data. If you are not working with deep CNNs, then you may be all right using the CPU and restricting yourself to smaller datasets.
In my scenario, I initially trained with the CPU only and then moved to GPU mode because of the speed improvements.
Example of speed
I was able to train on the entire MNIST dataset in under 15 minutes when using the GPU. Training on the CPU was much slower, but you can still learn by cutting down the size of the training dataset.
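For reference, a rough sketch of what "cutting down the training set" can look like in Keras (the subset size, architecture, and epoch count are arbitrary choices for illustration):

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Use only a 10,000-image slice of the 60,000 training images to keep
# CPU-only training times manageable.
x_small, y_small = x_train[:10000], y_train[:10000]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_small, y_small, epochs=3, batch_size=64)
print(model.evaluate(x_test, y_test, verbose=0))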
Tensorflow with GPU
https://www.tensorflow.org/install/gpu
You will need to go through all the installation steps. This involves not only installing Tensorflow but also the CUDA libraries.
What is CUDA?
CUDA is the parallel computing platform and programming model developed by NVIDIA for programming GPUs. They provide native libraries which talk to the underlying hardware.
https://docs.nvidia.com/cuda/
How to use TensorFlow GPU?
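Once the GPU build and the CUDA libraries are installed, Tensorflow places operations on the GPU automatically. A quick sanity check (TF 2.x API) is to list the devices it can see:

```python
import tensorflow as tf

# An empty list means Tensorflow is running CPU-only; a missing or mismatched
# CUDA/cuDNN install is the usual cause.
print(tf.config.list_physical_devices("GPU"))
```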
I have finished two neural network courses and done loads of reading on the subject. I am comfortable with Tensorflow and Keras and building advanced neural networks (multiple inputs, large data, special layers...). I also have a fairly deep understanding of the underlying mathematics.
My problem is that I know how to build neural networks but don't know the process by which an "expert" would create one for a specific application.
I can:
Collect loads of data and clean it up.
Train the neural network.
Fine-tune hyperparameters.
Export it for actual applications.
What I am missing is how to come up with the layers in the neural network (how wide, what kind, ...). I know it is somewhat trial and error and looking at what has worked for others, but there must be a process that people can use to come up with architectures* that actually work very well, for example the state-of-the-art neural networks.
I am looking for a free resource that would help me understand this process of creating a very good architecture*.
*by architecture I mean the different layers that make up the network and their properties
I wrote my masters thesis about the topic:
Thoma, Martin. "Analysis and Optimization of Convolutional Neural Network Architectures." arXiv preprint arXiv:1707.09725 (2017).
Long story short: there are a couple of techniques for analysis (chapter 2.5) and algorithms that learn topologies (chapter 3), but in practice it is mostly trial and error / gut feeling.
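To illustrate what that trial and error often looks like in practice, here is a hedged sketch of a simple random search over a few architecture choices (depth, width, dropout) with Keras; the search space, dataset, and budget are arbitrary assumptions, not anything taken from the thesis.

```python
import random
import tensorflow as tf

# MNIST as a stand-in task; the test split doubles as a validation set here.
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

def build(depth, width, dropout):
    layers = [tf.keras.layers.Input(shape=(28, 28)), tf.keras.layers.Flatten()]
    for _ in range(depth):
        layers.append(tf.keras.layers.Dense(width, activation="relu"))
        layers.append(tf.keras.layers.Dropout(dropout))
    layers.append(tf.keras.layers.Dense(10, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

best_acc, best_cfg = 0.0, None
for trial in range(10):
    cfg = (random.choice([1, 2, 3]),          # depth
           random.choice([64, 128, 256]),     # width
           random.choice([0.0, 0.25, 0.5]))   # dropout
    model = build(*cfg)
    model.fit(x_train, y_train, epochs=2, verbose=0)
    _, acc = model.evaluate(x_val, y_val, verbose=0)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print("best (depth, width, dropout):", best_cfg, "validation accuracy:", best_acc)
```

More principled versions of this loop (Bayesian optimization, Hyperband, neural architecture search) exist, but they are all automations of the same evaluate-and-compare idea.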
I've been reading about feed-forward Artificial Neural Networks (ANNs), and normally they need training to modify their weights in order to achieve the desired output. Once tuned, they will also always produce the same output when receiving the same input (biological networks don't necessarily).
Then I started reading about evolving neural networks. However, the evolution usually involves recombining two parents' genomes into a new genome; there is no "learning", really just recombining and verifying through a fitness test.
I was thinking: the human brain manages its own connections. It creates connections, strengthens some, and weakens others.
Is there a neural network topology that allows for this? One where the neural network, after a bad reaction, adjusts its weights accordingly and possibly creates random new connections (I'm not sure how the brain creates new connections, but even without knowing that, a random mutation chance of creating a new connection could alleviate this). A good reaction would strengthen those connections.
I believe this type of topology is known as a Turing Type B Neural Network, but I haven't seen any coded examples or papers on it.
This paper, An Adaptive Spiking Neural Network with Hebbian Learning, specifically addresses the creation of new neurons and synapses. From the introduction:
Traditional rate-based neural networks and the newer spiking neural networks have been shown to be very effective for some tasks, but they have problems with long term learning and "catastrophic forgetting." Once a network is trained to perform some task, it is difficult to adapt it to new applications. To do this properly, one can mimic processes that occur in the human brain: neurogenesis and synaptogenesis, or the birth and death of both neurons and synapses. To be effective, however, this must be accomplished while maintaining the current memories.
If you do some searching on Google with the keywords 'neurogenesis artificial neural networks', or similar, you will find more articles. There is also this similar question at cogsci.stackexchange.com.
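As a toy illustration of the Hebbian-plus-synaptogenesis idea described in that introduction (this is my own sketch, not code from the paper): correlated activity strengthens existing synapses, unused synapses slowly decay, and occasionally a brand-new random connection is grown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                          # number of neurons
weights = np.zeros((n, n))                      # 0.0 means "no synapse"
mask = rng.random((n, n)) < 0.2                 # sparse initial wiring
weights[mask] = rng.normal(0.0, 0.1, mask.sum())

eta = 0.01            # Hebbian learning rate
decay = 0.001         # slow weakening of unused synapses
p_new = 0.05          # chance per step of growing one new synapse

def step(activity):
    """One update given a vector of neuron activations."""
    global weights
    existing = weights != 0
    # Hebb's rule: units that fire together wire together.
    weights[existing] += eta * np.outer(activity, activity)[existing]
    # Crude "use it or lose it" decay of existing synapses.
    weights[existing] *= 1.0 - decay
    # Synaptogenesis: occasionally create a random new connection.
    if rng.random() < p_new:
        i, j = rng.integers(0, n, size=2)
        if i != j and weights[i, j] == 0:
            weights[i, j] = rng.normal(0.0, 0.1)

for _ in range(1000):
    step(rng.random(n))
```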
NEAT networks, as well as cascade-correlation networks, add their own connections/neurons to solve problems, building structures that create specific responses to stimuli.
I am planning to use neural networks for approximating a value function in a reinforcement learning algorithm. I want to do that to introduce some generalization and flexibility on how I represent states and actions.
Now, it looks to me that neural networks are the right tool for this; however, I have limited visibility here since I am not an AI expert. In particular, it seems that neural networks are being replaced by other technologies these days, e.g. support vector machines, but I am unsure whether this is a matter of fashion or whether there is some real limitation in neural networks that could doom my approach. Do you have any suggestions?
Thanks,
Tunnuz
It's true that neural networks are no longer in vogue as they once were, but they're hardly dead. The general reason for their fall from favor was the rise of the Support Vector Machine, because SVMs converge to a global optimum and require fewer parameter specifications.
However, SVMs are very burdensome to implement and don't naturally generalize to reinforcement learning like ANNs do (SVMs are primarily used for offline decision problems).
I'd suggest you stick to ANNs if your task seems suitable to one, as within the realm of reinforcement learning, ANNs are still at the forefront in performance.
Here's a great place to start; just check out the section titled "Temporal Difference Learning" as that's the standard way ANNs solve reinforcement learning problems.
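For concreteness, here is a minimal semi-gradient TD(0) sketch with a small Keras network as the value-function approximator; the random-walk environment and all hyperparameters are made up purely for illustration.

```python
import numpy as np
import tensorflow as tf

n_states = 19                                   # terminal states at both ends
def features(s):
    # Trivial one-dimensional state encoding; real problems need richer features.
    return np.array([[s / (n_states - 1)]], dtype=np.float32)

value_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.SGD(0.01)
gamma = 1.0

for episode in range(500):
    s = n_states // 2
    while s not in (0, n_states - 1):
        s_next = s + (1 if np.random.random() < 0.5 else -1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the right edge
        done = s_next in (0, n_states - 1)
        # The bootstrap target is treated as a constant (semi-gradient update).
        target = r + (0.0 if done else gamma * float(value_net(features(s_next))[0, 0]))
        with tf.GradientTape() as tape:
            v = value_net(features(s))
            loss = tf.reduce_mean((target - v) ** 2)
        grads = tape.gradient(loss, value_net.trainable_variables)
        opt.apply_gradients(zip(grads, value_net.trainable_variables))
        s = s_next
```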
One caveat though: the recent trend in machine learning is to use many diverse learning agents together via bagging or boosting. While I haven't seen this as much in reinforcement learning, I'm sure employing this strategy would still be much more powerful than an ANN alone. But unless you really need world-class performance (this is what won the Netflix competition), I'd steer clear of this extremely complex technique.
It seems to me that neural networks are kind of making a comeback. For example, this year there were a bunch of papers at ICML 2011 on neural networks. I would definitely not consider them abandonware. That being said, I would not use them for reinforcement learning.
Neural networks are a decent general way of approximating complex functions, but they are rarely the best choice for any specific learning task. They are difficult to design, slow to converge, and get stuck in local minima.
If you have no experience with neural networks, then you might be happier using a more straightforward method of generalizing RL, such as coarse coding.
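If coarse coding is new to you: the idea is just overlapping receptive fields turned into a binary feature vector, which a simple linear learner can then handle. A tiny 1D sketch (field count and width are arbitrary):

```python
import numpy as np

def coarse_code(x, n_fields=10, width=0.3, lo=0.0, hi=1.0):
    """Binary feature vector: 1 for each receptive field whose interval contains x."""
    centers = np.linspace(lo, hi, n_fields)
    return (np.abs(x - centers) <= width / 2).astype(float)

print(coarse_code(0.42))   # nearby values of x activate overlapping sets of fields
```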
Theoretically, it has been proven that neural networks can approximate any function (given enough hidden neurons and the necessary inputs), so no, I don't think neural networks will ever be abandonware.
SVMs are great, but they cannot be used for all applications, while neural networks can be applied to nearly any purpose.
Using neural networks in combination with reinforcement learning is standard and well known, but be careful to plot and debug your neural network's convergence to check that it works correctly, as neural networks are notoriously hard to implement and train correctly.
Also be very careful about the representation of the problem you give to your neural network (i.e., the input nodes): could you, or could an expert, solve the problem given only what you feed to your net as inputs? Very often, people implementing neural networks don't give enough information for the neural net to reason with; this is not uncommon, so be careful with that.
I am writing a report, and I would like to know, in your opinion, which open-source physical simulation methods (like Molecular Dynamics, Brownian Dynamics, etc.) that have not yet been ported would be worth porting to the GPU or to other special hardware that can potentially speed up the calculation.
Links to the projects would be really appreciated.
Thanks in advance
Any physical simulation technique, be it finite difference, finite element, or boundary element, could benefit from a port to the GPU. The same goes for Monte Carlo simulations of financial models. Anything that could use that smoking floating-point processing power, really.
I am currently working on a quantum chemistry application on the GPU. As far as I am aware, quantum chemistry is one of the most demanding areas in terms of total CPU time. There have been a number of papers regarding GPUs and quantum chemistry; you can research those.
As far as methods go, all of them are open source. Are you asking about a particular program? Then you can look at PyQuante or MPQC. For molecular dynamics, look at HOOMD. You can also Google QCD on GPU.