Neural network model prediction in Keras without using GPU - neural-network

My neural networks model was built in Keras over Theano using a GPU.
I am storing it using Pickle for future use, possibly on another computer.
Is it possible to use the model for prediction without a GPU?

Sure. It's even a common use-case. GPUs help boost training, but sometimes aren't available in production (for example, if you run on a customer's phone).
I don't know Theano well, but it may have an equivalent to TensorFlow Serving. In any case, you can always serialize the trained Model object and read it on the other machine.
To serialize, you can either use:
The built-in keras.models.save_model and keras.models.load_model, which dump models to HDF5 files.
If you need or prefer pickle: it isn't officially supported by Keras, but you can use this trick - http://zachmoshe.com/2017/04/03/pickling-keras-models.html
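As a minimal sketch of the serialization round-trip (using plain pickle on a dictionary of weight lists rather than a real Keras model, so it runs without Keras or a GPU; in Keras itself, `model.get_weights()` returns a comparable list of arrays):

```python
import pickle

# Hypothetical trained weights extracted from a model.
weights = {"dense_1": [[0.5, -0.2], [0.1, 0.9]], "bias_1": [0.0, 0.1]}

# Serialize on the training machine...
with open("model_weights.pkl", "wb") as f:
    pickle.dump(weights, f)

# ...and restore on the prediction machine, no GPU required.
with open("model_weights.pkl", "rb") as f:
    restored = pickle.load(f)
```

For actual Keras models, prefer `save_model`/`load_model` over pickle, since the HDF5 file also stores the architecture and optimizer state.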

Related

Do I need a GPU even to deploy a deep learning model?

I know I need a GPU to train a model, but once the model is trained, do I need a GPU to deploy it?
For example, I have a model for a car with an auto-pilot that has to predict and take decisions. Do I need a GPU for the prediction too, especially in the case of reinforcement learning?
Strictly speaking, you usually don't need a GPU for training either, depending on the platform; training would just be much slower on the CPU than on the GPU.
For deploying the model you do not need a GPU. Most models are simply an organized collection of weights which the model uses to operate on its inputs. Since inference usually isn't particularly computationally expensive, except for very large models, a GPU isn't necessary for deployment either, though it may provide some performance benefit for larger models.
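To make that concrete, here is a deliberately tiny sketch (pure Python, no framework, all numbers made up) of what deployment boils down to: applying stored weights to an input vector.

```python
import math

# Hypothetical stored weights for a single neuron with two inputs.
weights = [0.4, -0.6]
bias = 0.05

def predict(x):
    # Inference is just multiply-accumulate plus an activation -
    # cheap enough that a CPU handles it easily for small models.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

p = predict([1.0, 2.0])
```

Real models stack many such layers, but the arithmetic is the same kind; only very deep or wide models make a GPU worthwhile at inference time.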

Is it possible to simultaneously use and train a neural network?

Is it possible to use Tensorflow or some similar library to make a model that you can efficiently train and use at the same time.
An example/use case for this would be a chat bot that you give feedback to. Somewhat like how pets learn (i.e. replicating what they just did for a reward). Or being able to add new entries or new responses they can use.
I think what you are asking is whether a model can be trained continuously without having to retrain it from scratch each time new labelled data comes in.
The answer to that is: online models.
There are models that can be trained continuously on data without worrying about training them from scratch. As per the Wikipedia definition:
Online machine learning is a method of machine learning in which data becomes available in sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once.
Some examples of such algorithms are
BernoulliNB
GaussianNB
MiniBatchKMeans
MultinomialNB
PassiveAggressiveClassifier
PassiveAggressiveRegressor
Perceptron
SGDClassifier
SGDRegressor
DNNs
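To illustrate the online idea without extra dependencies, here is a hand-rolled perceptron update (rather than the scikit-learn classes listed above, which expose this via `partial_fit`): each new labelled example nudges the weights, with no retraining from scratch and no access to earlier data required.

```python
# Minimal online perceptron for 2-D inputs; labels are 0 or 1.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def partial_fit(x, y):
    # Update only on mistakes: one example at a time,
    # exactly the setting online learning is designed for.
    global bias
    error = y - predict(x)  # +1, 0, or -1
    if error:
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

# A stream of (input, label) pairs arriving one at a time;
# here the label is simply 1 when x[0] > x[1].
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([2.0, 1.0], 1), ([1.0, 2.0], 0)]
for _ in range(20):          # revisit the stream a few times
    for x, y in stream:
        partial_fit(x, y)
```

In scikit-learn the same pattern is `clf.partial_fit(X_batch, y_batch, classes=...)` called repeatedly as batches arrive.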

Training a neural network using CPU only

I'm working on a virtual machine on a remote server and I want to train a neural network on it, but I don't have GPUs available in this VM. Is it possible to train the net on this VM using the CPU only? And if so, does it work with a large dataset, or will that be a problem?
I have used TensorFlow for training a deep neural network, both with a GPU and CPU-only. The rest of my response is in the context of TensorFlow.
Please be aware that Convolutional Neural Nets are generally more resource-hungry than standard feed-forward neural networks because CNNs deal with much higher-dimensional data. If you are not working with deep CNNs, then you may be all right using the CPU and restricting yourself to smaller datasets.
In my scenario, initially I was training with CPU only and then moved on to GPU mode because of speed improvements.
Example of speed
I was able to train on the entire MNIST dataset in under 15 minutes when using the GPU. Training on the CPU was much slower, but you can still learn by cutting down the size of the training data set.
Tensorflow with GPU
https://www.tensorflow.org/install/gpu
You will need to go through all the installation steps. This involves not only installing Tensorflow but also CUDA libraries.
What is CUDA?
CUDA is NVIDIA's platform and programming model for general-purpose computing on GPUs. NVIDIA provides native libraries which talk to the underlying hardware.
https://docs.nvidia.com/cuda/
How to use TensorFlow GPU?
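Conversely, if you end up on a machine without a GPU (or want to force CPU execution for testing), a common approach is to hide all CUDA devices from the framework before it is imported. The `CUDA_VISIBLE_DEVICES` variable is standard CUDA behaviour; the TensorFlow import here is only illustrative:

```python
import os

# Hide all CUDA devices so the framework falls back to the CPU.
# This must be set before the framework is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf   # would now see no GPUs and run on the CPU
```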

Is it possible to return tensorflow code from compiled Keras model?

I'll start this post by saying that I acknowledge this may not be the appropriate venue for this question, but wasn't sure where else to start. If there is a more appropriate SE channel, please feel free to suggest.
I've been using Keras for learning how to apply neural networks to different prediction problems. I'm interested in learning TensorFlow as a way to gain a deeper understanding of the inner workings of these networks. Obviously, it's possible to switch the backend of Keras to TensorFlow and to use Keras as a high-level API to TensorFlow. However, is there a way to "recover" the TensorFlow code from a compiled Keras model? I'm thinking it would be extremely useful to be able to write a model that I'm familiar with in Keras, and automatically see its "translation" to TensorFlow as a way to learn this library more quickly.
Any thoughts or suggestions would be helpful. Thanks for reading.
All Keras does is abstract both Theano and TensorFlow into one unified backend module. It then uses the functions in that backend to implement the layers and methods you are able to use in Keras.
This in turn means that there is no compilation step involved in generating code for one particular backend. Both Theano and TensorFlow are Python libraries; there is no translation step, Keras just calls into the library you specify.
The best way to find out how a model in Keras would be written in TensorFlow is probably to search for a simple network trained on the same dataset and compare the TensorFlow and Keras examples. Another way would be to read the Keras code and look up each K.<function> in the TensorFlow backend module.
If you are interested in the platform-specific code that the individual backends produce, e.g. the CUDA code, then the answer is: it depends. Both Theano and TensorFlow use temporary directories to store the generated code and sources. For Theano this is ~/.theano by default. But looking at this code will probably not make you any wiser about neural networks and their mechanics.
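You can also get a feel for the mapping without reading backend source: a Keras Dense layer with a ReLU activation is, underneath, just a matrix multiply, a bias add, and a max. Here is that computation written out in plain Python (the TensorFlow equivalents are noted in comments; the layer sizes and numbers are made up for illustration):

```python
# What Dense(units=2, activation="relu") computes for one input vector.
# In TensorFlow this would be tf.nn.relu(tf.matmul(x, W) + b).

x = [1.0, 2.0, 3.0]               # input vector, 3 features
W = [[0.1, -0.3],                 # kernel, shape (3, 2)
     [0.2, 0.4],
     [-0.5, 0.6]]
b = [0.05, -0.1]                  # bias, shape (2,)

# Matrix multiply: z_j = sum_i x_i * W[i][j], then add the bias.
z = [sum(x[i] * W[i][j] for i in range(3)) + b[j] for j in range(2)]
# ReLU activation: max(0, z).
out = [max(0.0, v) for v in z]
```

Comparing a Keras layer's definition against the corresponding K.<function> calls in the backend module shows the same handful of primitives over and over.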

torch7 : how to connect the neurons of the same layer?

Is it possible to implement, using torch, an architecture that connects the neurons of the same layer?
What you describe is called a recurrent neural network. Note that it needs quite a different type of structure, input data, and training algorithm to work well.
There is the rnn library for Torch to work with recurrent neural networks.
Yes, it's possible. Torch has everything that other frameworks have: logical operations, reading/writing operations, array operations. That's all that's needed for implementing any kind of neural network. Taking into account that Torch supports CUDA, you can even implement a neural network which runs faster than some C# or Java implementations. The performance improvement can depend on the number of if/else branches during one iteration.