Are Autoencoders lighter than MLPs?

I want to know if autoencoders are computationally lighter than other neural networks, such as MLPs with the same number of neurons. I have read in some papers that autoencoders train faster, and I was able to verify this in Python: the training step is faster than for an MLP. However, all of these papers mention it only briefly. I would like to know whether there are papers that explain this in more detail, because I could not find any.
Thank you in advance.
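
For reference, a timing check like the one described can be sketched as follows, using scikit-learn's MLP estimators. The dataset, layer sizes, and iteration counts are illustrative assumptions, and since the two models have different output dimensions, the comparison is only indicative:

```python
# Time an autoencoder (an MLP trained to reconstruct its input, X -> X)
# against a supervised MLP with the same hidden layer (X -> y).
import time

from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier, MLPRegressor

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

autoencoder = MLPRegressor(hidden_layer_sizes=(32,), max_iter=50)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=50)

t0 = time.perf_counter()
autoencoder.fit(X, X)
print(f"autoencoder: {time.perf_counter() - t0:.2f}s")

t0 = time.perf_counter()
mlp.fit(X, y)
print(f"classifier:  {time.perf_counter() - t0:.2f}s")
```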

Related

Deep Learning Neural Networks for Time Series Prediction

I'm starting work on Internet traffic prediction (time series prediction) using artificial neural networks, but I have little experience with the subject.
Does anyone know which method is best for that? (Which type
of neural network to use for time series prediction?)
Is deep learning with unsupervised training a good idea for time
series learning?
You can do time-series prediction with neural nets, but it can get pretty tricky.
1) The obvious choice is a recurrent neural network (RNN). However, these can be really difficult to train, and I would not recommend RNNs if this is your first time using neural nets. Recently there has been some interesting work on easing the training of RNNs (e.g. Hessian-free optimization), but again - it's probably not for beginners ;-) Alternatively, you could try a scheme where you use a standard neural net (i.e. not an RNN) and predict the next frame of data from the previous ones (see the sketch after this answer). That might work.
2) This question is too general; there is no categorical right answer. Yes, you can use unsupervised feature learning as part of your solution (e.g. pre-training your model), but if your end goal is time-series prediction you will need to do some supervised learning too.
Good luck!
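
As an illustration of the windowing idea from (1), here is a minimal sketch using scikit-learn's MLPRegressor. The synthetic series, window size, and network shape are illustrative assumptions, not tuned settings:

```python
# Sliding-window prediction with a standard feed-forward net:
# learn to predict the next value from the previous `window` values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)

window = 10
# Each row of X holds `window` consecutive values; y is the value after them.
X = np.array([series[i : i + window] for i in range(len(series) - window)])
y = series[window:]

model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=500)
model.fit(X[:-200], y[:-200])                            # train on all but the tail
print("held-out R^2:", model.score(X[-200:], y[-200:]))  # evaluate on the tail
```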

OCR and Neural Networks?

I am trying to code an OCR for shop tickets (in Java). I have good results with image dictionary distance, but not for skewed text or bad scans.
I heard that neural networks are perfect for this.
Question: which type of neural network do you recommend for character detection on shop tickets?
Thanks
Neural networks will not magically solve the problem for you. They will have problems similar to those of your current approach. Most likely you will have to detect skew and correct it before sending the image to a classifier.
Similarly with bad scans: it depends on what exactly a bad scan is. For example, some neural networks are amazingly effective at correcting blur (an unfocused image, motion blur, ...).
Have a look at some papers about OCR and neural networks. It is a classical topic, so there are many. For example, The Anatomy of Bangla OCR System for Printed Texts Using Back Propagation Neural Network also tries to solve the problem of skewed images before running a neural network.
I know that recurrent neural networks can be used for OCR. Even a very simple one will easily recognize simple characters. There is a recent paper that improves upon them: High-Performance OCR for Printed English and Fraktur using LSTM Networks. They even include text-line normalization, which may be very useful in your case.
Notice that there is an answer here about training a normal Feed-forward backpropagation neural network for OCR too: training feedforward neural network for OCR
"Convolutional Neural Networks" with "Deep Learning" have been shown to give some of the best results in OCR (specifically on the MNIST database).
A good starting point is this tutorial.
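
As a minimal illustration of that approach, here is a sketch of a small convolutional network on MNIST using Keras. The architecture and training settings are illustrative assumptions rather than a definitive recipe:

```python
# A small CNN for handwritten-digit recognition on MNIST.
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0  # add channel axis, scale
x_test = x_test[..., None].astype("float32") / 255.0

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```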

Best method to implement text classification (2 classes)

I have to write a classifier for a corpus of texts, which should separate all my texts into 2 classes.
The corpus is very large (nearly 4 million documents for testing, and 50,000 for training).
But what algorithm should I choose?
Naive Bayesian
Neural networks
SVM
Random forest
kNN (why not?)
I have heard that random forests and SVMs are state-of-the-art methods, but maybe someone
has experience with the algorithms listed above and knows which is fastest and which is most accurate?
For a two-class text classifier, I don't think you need:
(1) kNN: it is an instance-based method that has to store and search the whole training set at prediction time, so it is slow on a corpus of this size;
(2) Random forest: decision trees may not be a good option in high-dimensional, sparse feature spaces;
You can try:
(1) Naive Bayes: the most straightforward and easiest to code; proven to work well in text classification problems;
(2) Logistic regression: works well if your number of training samples is much larger than your number of features;
(3) SVM: again, when there are far more training samples than features, an SVM with a linear kernel works about as well as logistic regression, and it is also one of the top algorithms in text classification;
(4) Neural network: seems like a panacea in machine learning. In theory it can learn any model that an SVM or logistic regression could, but there are fewer mature packages for neural networks than for SVMs, and training one is time-consuming.
Yet it is hard to say which algorithm best suits your case. If you are using Python, scikit-learn includes almost all of these algorithms for you to test (see the sketch below). Besides, Weka, which integrates many machine learning algorithms in a user-friendly graphical interface, is also a good way to get a feel for the performance of each algorithm.
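
A minimal sketch of such a test in scikit-learn, comparing two of the suggestions above (Naive Bayes and a linear SVM). The 20 newsgroups dataset is a stand-in assumption for your own corpus:

```python
# Two-class text classification: TF-IDF features + two candidate models.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Two categories -> a binary problem, as in the question.
cats = ["sci.space", "rec.autos"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Linear SVM", LinearSVC())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)  # sparse TF-IDF features
    pipe.fit(train.data, train.target)
    print(name, "accuracy:", pipe.score(test.data, test.target))
```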

Neuroph Vs Encog

I have decided to use a feed-forward NN with backpropagation training for my OCR application for handwritten text. The input layer is going to have 32×32 (1,024) neurons, and there will be at least 8-12 output neurons.
I found Neuroph easy to use from reading some articles, while Encog is reportedly a few times better in performance. Considering the parameters in my scenario, which API is the more suitable one? I would also appreciate a comment on the number of input nodes I have chosen: is it too large a value? (Although that is off topic.)
First, my disclaimer: I am one of the main developers on the Encog project. This means I am more familiar with Encog than with Neuroph, and perhaps biased towards it. In my opinion, the relative strengths of each are as follows. Encog supports quite a few interchangeable machine learning methods and training methods. Neuroph is VERY focused on neural networks, and it lets you express a connection between just about anything. So if you are going to create very custom/non-standard (research) neural networks with topologies different from the typical Elman/Jordan, NEAT, HyperNEAT, or feedforward networks, then Neuroph will fit the bill nicely.

Are neural networks really abandonware?

I am planning to use neural networks for approximating a value function in a reinforcement learning algorithm. I want to do that to introduce some generalization and flexibility on how I represent states and actions.
Now, it looks to me that neural networks are the right tool to do that; however, I have limited visibility here since I am not an AI expert. In particular, it seems that neural networks are being replaced by other technologies these days, e.g. support vector machines, but I am unsure whether this is a matter of fashion or whether there is some real limitation in neural networks that could doom my approach. Do you have any suggestions?
Thanks,
Tunnuz
It's true that neural networks are no longer in vogue as they once were, but they're hardly dead. The general reason for their fall from favor was the rise of the support vector machine: SVM training converges to a global optimum (the objective is convex) and there are fewer parameters to specify.
However, SVMs are very burdensome to implement and don't naturally generalize to reinforcement learning like ANNs do (SVMs are primarily used for offline decision problems).
I'd suggest you stick to ANNs if your task seems suitable to one, as within the realm of reinforcement learning, ANNs are still at the forefront in performance.
Here's a great place to start; just check out the section titled "Temporal Difference Learning" as that's the standard way ANNs solve reinforcement learning problems.
One caveat, though: the recent trend in machine learning is to combine many diverse learning agents via bagging or boosting. While I haven't seen this as much in reinforcement learning, I'm sure employing this strategy would still be much more powerful than an ANN alone. But unless you really need world-class performance (ensembles of this kind are what won the Netflix Prize), I'd steer clear of this extremely complex technique.
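
To make the temporal-difference pointer above concrete, here is a minimal sketch of semi-gradient TD(0) with a tiny neural network as the value-function approximator. The toy random-walk environment, network size, and learning rate are all illustrative assumptions:

```python
# Semi-gradient TD(0) on a 7-state random walk (states 0..6; 0 and 6 are
# terminal, reward 1 for reaching the right end). A one-hidden-layer net
# approximates the state-value function V(s).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_hidden = 7, 8
alpha, gamma = 0.05, 1.0

# Network parameters: V(s) = w2 . tanh(W1 x + b1) + b2
W1 = rng.normal(scale=0.5, size=(n_hidden, n_states))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

def value_and_grads(s):
    x = np.zeros(n_states)
    x[s] = 1.0                            # one-hot state encoding
    h = np.tanh(W1 @ x + b1)
    v = w2 @ h + b2
    dh = (1.0 - h ** 2) * w2              # backprop through tanh
    return v, (np.outer(dh, x), dh, h, 1.0)

for episode in range(3000):
    s = 3                                 # start in the middle state
    while s not in (0, 6):
        s_next = s + rng.choice((-1, 1))  # uniform random-walk policy
        r = 1.0 if s_next == 6 else 0.0
        v, (gW1, gb1, gw2, gb2) = value_and_grads(s)
        v_next = 0.0 if s_next in (0, 6) else value_and_grads(s_next)[0]
        delta = r + gamma * v_next - v    # TD error
        W1 += alpha * delta * gW1         # semi-gradient TD(0) updates
        b1 += alpha * delta * gb1
        w2 += alpha * delta * gw2
        b2 += alpha * delta * gb2
        s = s_next

# True values for states 1..5 are 1/6, 2/6, ..., 5/6.
print([round(float(value_and_grads(s)[0]), 2) for s in range(1, 6)])
```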
It seems to me that neural networks are kind of making a comeback. For example, this year there were a bunch of papers at ICML 2011 on neural networks. I would definitely not consider them abandonware. That being said, I would not use them for reinforcement learning.
Neural networks are a decent general way of approximating complex functions, but they are rarely the best choice for any specific learning task. They are difficult to design, slow to converge, and get stuck in local minima.
If you have no experience with neural networks, then you might be happier to use a more straightforward method of generalizing RL, such as coarse coding.
Theoretically, it has been proven that neural networks can approximate any continuous function (the universal approximation theorem, given enough hidden neurons and suitable inputs), so no, I don't think neural networks will ever be abandonware.
SVMs are great, but they cannot be used for every application, while neural networks can be applied to almost any purpose.
Using neural networks in combination with reinforcement learning is standard and well known, but be careful to plot and debug your neural network's convergence to check that it works correctly, as neural networks are notoriously hard to implement and train correctly.
Also be very careful about the representation of the problem you give to your neural network (i.e., the input nodes): could you, or could an expert, solve the problem given only what you feed into your net? Very often, people implementing neural networks don't give the network enough information to reason with; this is not uncommon, so be careful with that.