Training a neural network using CPU only - neural-network

I'm working on a virtual machine on a remote server and I want to train a neural network on it, but I don't have GPUs to use in this VM. Is it possible to train the net on this VM using CPU only? And if that is the case, will it work with a large dataset, or will that be a problem?

I have used TensorFlow for training a deep neural network, both with a GPU and with CPU only. The rest of my response is in the context of TensorFlow.
Please be aware that Convolutional Neural Nets are generally more resource-hungry than standard feed-forward neural networks, because CNNs deal with much higher-dimensional data. If you are not working with deep CNNs, you may be all right using the CPU and restricting yourself to smaller datasets.
In my scenario, I initially trained with CPU only and then moved to GPU mode because of the speed improvement.
Example of speed
I was able to train on the entire MNIST dataset in under 15 minutes when using the GPU. Training on the CPU was much slower, but you can still learn by cutting down the size of the training dataset.
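To make this concrete, here is a minimal CPU-only sketch on a reduced MNIST subset. It assumes TensorFlow 2.x with the Keras API, which the answer above does not prescribe, so treat it purely as an illustration:
import tensorflow as tf
# Hide any GPUs so TensorFlow falls back to the CPU (a no-op on a CPU-only VM).
tf.config.set_visible_devices([], 'GPU')
# Load MNIST and keep only a slice of it so CPU training stays quick.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[:10000] / 255.0
y_train = y_train[:10000]
# A small feed-forward network; a deep CNN would be far slower on CPU.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=64)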
TensorFlow with GPU
https://www.tensorflow.org/install/gpu
You will need to go through all the installation steps. This involves not only installing TensorFlow but also the CUDA libraries.
What is CUDA?
CUDA is NVIDIA's platform and programming model for general-purpose computing on their GPUs. NVIDIA provides native libraries that talk to the underlying hardware.
https://docs.nvidia.com/cuda/
How to use TensorFlow GPU?
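Assuming TensorFlow 2.x (the 1.x API differs), a quick sanity check that the GPU build and the CUDA libraries are picked up might look like this:
import tensorflow as tf
# An empty list here means TensorFlow does not see the GPU / CUDA setup.
print(tf.config.list_physical_devices('GPU'))
# Most ops are placed on a visible GPU automatically, but placement can be forced.
with tf.device('/GPU:0'):
    a = tf.random.normal([1000, 1000])
    b = tf.matmul(a, a)
print(b.device)  # should report a GPU device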

Related

torch7 : how to connect the neurons of the same layer?

Is it possible to implement, using Torch, an architecture that connects the neurons of the same layer?
What you describe is called a recurrent neural network. Note that it needs a quite different type of structure, input data, and training algorithm to work well.
There is the rnn library for Torch to work with recurrent neural networks.
Yes, it's possible. Torch has everything other languages have: logical operations, read/write operations, array operations. That's all that is needed to implement any kind of neural network. If you also take into account that Torch can use CUDA, you can even implement a neural network that runs faster than some C# or Java implementations. The performance improvement can depend on the number of if/else branches during one iteration.
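To make the recurrent idea concrete, here is a language-agnostic sketch in plain Python/NumPy rather than Torch7 code: the hidden-to-hidden matrix W_hh is what connects the neurons of one layer to each other across time steps.
import numpy as np
# Toy recurrent step: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b).
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 5, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # same-layer connections
b = np.zeros(hidden_size)
h = np.zeros(hidden_size)
for t in range(seq_len):
    x_t = rng.normal(size=input_size)       # stand-in for the real input at step t
    h = np.tanh(W_xh @ x_t + W_hh @ h + b)  # the layer feeds back into itself
print(h)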

How to speed up GPU mode convolutional neural network with theano?

I'm using Theano to implement a convolutional neural network. My CPU RAM is 32 GB and my GPU RAM is 2 GB, but the data is very big: almost 5 GB of training data.
When the program is running, the computer seems to be frozen and each operation is really slow; sometimes it doesn't respond at all. And CPU mode seems to be at least 2x faster than GPU mode.
Is there any way to speed up the GPU convolutional neural network?
Make sure to use Theano 0.7 with cuDNN; this speeds up convolution heavily:
http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html
In order to use GPU acceleration, the first thing you need to do is install CUDA.
At the level of the Theano configuration (Theano flags / .theanorc) there are a few ways you can speed up your model on the GPU (a sketch follows this list):
Specify usage of the GPU: "device = gpu"
Enable CUDA memory pre-allocation (CNMeM): "cnmem = 0.75"
Enable cuDNN optimization: "optimizer = cudnn"
You can read more about the Theano configuration options in the Theano documentation.
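For illustration, one way to set these options is through the THEANO_FLAGS environment variable before Theano is first imported. The exact flag names (cnmem, the cuDNN optimizer flag) changed between Theano releases, so verify them against the docs for your version:
import os
# Must run before the first `import theano`; flag spellings are version-dependent.
os.environ["THEANO_FLAGS"] = ",".join([
    "device=gpu",      # run on the GPU
    "floatX=float32",  # single precision, which older GPUs handle best
    "lib.cnmem=0.75",  # pre-allocate 75% of GPU memory via CNMeM
])
import theano
print(theano.config.device)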

Ok to use Java to model quantum mechanical behavior with ANNs?

I am working on an independent project. I am studying chemistry in school, along with computer science, and would like to know if it is possible to model certain wave-function phenomena (Schrödinger's equation, Hamiltonians, eigenvalues) using Artificial Neural Networks.
My main questions are:
Would I be able to program and compute this on my laptop? My laptop is an Asus Q200e.
If it is not possible on the laptop, would I be able to use my desktop, which contains an i5 processor and a fast GPU?
Your questions
Yes, you may use your Asus Q200e to compute your neural network.
Using a more powerful computer always helps. If you are willing to go the extra mile and perform the calculations on your GPU, the process will be even faster.
Applying neural networks to quantum mechanics
There is actually some literature on how to proceed with creating such neural networks. See this link to get a few pointers:
Artificial neural network methods in quantum mechanics
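Purely as an illustration of the scale of the computation (and not a method from the paper above), here is a toy that fits a tiny network to the known ground state of the 1D harmonic oscillator, psi(x) proportional to exp(-x^2/2) with hbar = m = omega = 1; it runs in seconds on a laptop CPU:
import numpy as np
rng = np.random.default_rng(0)
x = np.linspace(-4.0, 4.0, 256).reshape(-1, 1)
psi = np.exp(-x**2 / 2.0)                    # target wavefunction (unnormalized)
# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.1
for step in range(2000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - psi
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)           # backprop through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
print("final MSE:", float(np.mean((pred - psi) ** 2)))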

Accelerating MATLAB code using GPUs?

AccelerEyes announced in December 2012 that it is working with MathWorks on GPU code and has discontinued its product Jacket for MATLAB:
http://blog.accelereyes.com/blog/2012/12/12/exciting-updates-from-accelereyes/
Unfortunately they do not sell Jacket licences anymore.
As far as I understand, the Jacket GPU Array solution based on ArrayFire was much faster than the gpuArray solution provided by MATLAB.
I started working with gpuArray, but I see that many functions are implemented poorly. For example a simple
myArray(:) = 0
is very slow. I have written some custom CUDA-Kernels, but the poorly-implemented standard MATLAB functionality adds a lot of overhead, even if working with gpuArrays consistently throughout the code. I fixed some issues by replacing MATLAB code with hand written CUDA code - but I do not want to reimplement the MATLAB standard functionality.
Another feature I am missing is sparse GPU matrices.
So my questions are:
How do I speed up the poorly implemented default GPU implementations provided by MATLAB? In particular, how do I speed up sparse matrix operations in MATLAB using the GPU?
MATLAB does support CUDA-based GPUs. You have to access this through the Parallel Computing Toolbox. Hope these two links also help:
Parallel Computing Toolbox Features
Key Features
Parallel for-loops (parfor) for running task-parallel algorithms on multiple processors
Support for CUDA-enabled NVIDIA GPUs
Full use of multicore processors on the desktop via workers that run locally
Computer cluster and grid support (with MATLAB Distributed Computing Server)
Interactive and batch execution of parallel applications
Distributed arrays and single program multiple data (spmd) construct for large dataset handling and data-parallel algorithms
MATLAB GPU Computing Support for NVIDIA CUDA-Enabled GPUs
Using MATLAB for GPU computing lets you accelerate your applications with GPUs more easily than by using C or Fortran. With the familiar MATLAB language you can take advantage of CUDA GPU computing technology without having to learn the intricacies of GPU architectures or low-level GPU computing libraries.
You can use GPUs with MATLAB through Parallel Computing Toolbox, which supports:
CUDA-enabled NVIDIA GPUs with compute capability 2.0 or higher. For releases 14a and earlier, compute capability 1.3 is sufficient.
GPU use directly from MATLAB
GPU-enabled MATLAB functions such as fft, filter, and several linear algebra operations
GPU-enabled functions in toolboxes: Image Processing Toolbox, Communications System Toolbox, Statistics and Machine Learning Toolbox, Neural Network Toolbox, Phased Array Systems Toolbox, and Signal Processing Toolbox (Learn more about GPU support for signal processing algorithms)
CUDA kernel integration in MATLAB applications, using only a single line of MATLAB code
Multiple GPUs on the desktop and computer clusters using MATLAB workers in Parallel Computing Toolbox and MATLAB Distributed Computing Server
I had the pleasure of attending a talk by John, the founder of AccelerEyes. They did not get their speedup by just removing poorly written code and replacing it with code that saved a few bits here and there; the speedup came mostly from exploiting the cache and keeping a lot of operations in GPU memory. MATLAB, if I remember correctly, relied on transferring data between GPU and CPU, and hence Jacket's speedup over it was dramatic.

GPU perfomance request, what's the best solution?

I work on an audio processing project that needs to do a lot of basic computations (+, -, *) as well as FFT (Fast Fourier Transform) calculations.
We're considering using a graphics card to accelerate these computations, but we don't know if this is the best solution. Our desired solution needs to be a good computation system costing less than $500.
We program in MATLAB, and we have a sound-card acquisition device which has to be plugged into the system.
Do you know of a solution other than a graphics card + motherboard to do a lot of computation?
You can use the free MATLAB CUDA library to perform the computations on the GPU. $500 will give you a very decent NVIDIA GPU. Beware that GPUs have limited video memory and will run out of memory with large data volumes even faster than MATLAB.
I have benchmarked an 8-core Intel CPU against an NVIDIA 8800 GPU (128 stream processors) with GPUmat; for 512 KB datasets the GPU came out at about the same speed as the 8-core Intel at 2 GHz, including transfer times to GPU memory. For serious GPU work I recommend a dedicated card rather than the one you are using to drive the monitor: use the motherboard's cheap Intel video for the monitor and pass the array computations to the NVIDIA card.
Parallel Computing Toolbox from MathWorks now includes GPU support. In particular, elementwise operations and arithmetic are supported, as well as 1- and 2-dimensional FFTs (along with a whole bunch of other stuff to support hand-written CUDA code if you have that). If you're interested in performing calculations in double-precision, the recent Tesla and Quadro branded cards will give you the best performance.
Here's a trivial example showing how you might use the GPU in MATLAB using Parallel Computing Toolbox:
gA = gpuArray( rand(1000) );   % copy a 1000-by-1000 random matrix onto the GPU
gB = fft( 1 + gA * 3 );        % elementwise arithmetic and the FFT run on the GPU