I have an optimisation problem where the objective function I want to maximise is not differentiable. I've trained a linear model using a genetic algorithm, but the performance of the linear model is not that good. I am thinking about replacing the linear model with a neural network, but my understanding is that with a non-differentiable objective function I cannot use backpropagation to do the updates.
So, does anyone know how to use a genetic algorithm to train a neural network?
Yes. This is called neuroevolution. If you are good at programming, you could write your own NEAT (NeuroEvolution of Augmenting Topologies) implementation. However, there are already a lot of implementations out there.
If you want to play around with neuroevolution first, you might want to check out Neataptic. All you need to do is set up the network and run a single function to get the neuroevolution started.
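To make the idea concrete, here is a minimal mutation-only sketch of the core loop: evolve the weights of a fixed-topology network using nothing but a scalar fitness score, so no gradients are ever needed. Everything here (the topology, the toy data standing in for your real objective, the population sizes) is an assumption for illustration; NEAT-style libraries such as Neataptic additionally evolve the topology itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed topology (assumed): 4 inputs, one hidden layer of 8, 1 output.
N_IN, N_HID, N_OUT = 4, 8, 1
N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

def forward(genome, x):
    """Decode a flat genome into weight matrices and run the network."""
    w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(x @ w1) @ w2

# Toy data standing in for the real, non-differentiable objective:
# the GA only ever sees the scalar score returned by fitness().
X = rng.normal(size=(64, N_IN))
Y = np.sin(X.sum(axis=1, keepdims=True))

def fitness(genome):
    return -np.mean((forward(genome, X) - Y) ** 2)

population = [rng.normal(0.0, 0.5, N_WEIGHTS) for _ in range(50)]
for generation in range(200):
    parents = sorted(population, key=fitness, reverse=True)[:10]  # truncation selection
    children = [parents[rng.integers(len(parents))]
                + rng.normal(0.0, 0.1, N_WEIGHTS)                 # mutation only
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
```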
Is it possible to design a neuro-fuzzy system using Accord.NET? I can't find anything regarding neuro-fuzzy systems in the Accord documentation. Has anyone done this before? Is there a technique for doing this?
I looked at this file: https://github.com/accord-net/framework/blob/development/Sources/Accord.Fuzzy/InferenceSystem.cs
Accord.NET seems to support fuzzy inference systems (FIS). So now you have to figure out how you want to model the membership functions (MF) using parameters, and then use a special neural network (NN) to model the rules such that, as the NN is trained, the MF parameters are tuned to give you the best results.
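To illustrate what "tuning the MF parameters" means (a Python sketch of the concept, not Accord.NET code; the Gaussian shape, toy target, and learning rate are my assumptions): the centre and width of a membership function are just parameters, and training nudges them to reduce the output error, which is essentially what an ANFIS-style neuro-fuzzy network does.

```python
import numpy as np

# Gaussian membership function with two tunable parameters.
def mf(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Toy target: membership should be high near x = 3.
x = np.linspace(0.0, 6.0, 50)
target = (np.abs(x - 3.0) < 1.0).astype(float)

center, sigma, lr = 1.0, 1.0, 0.05
for step in range(500):
    mu = mf(x, center, sigma)
    err = mu - target
    # Gradients of the mean squared error w.r.t. the MF parameters.
    d_center = np.mean(err * mu * (x - center) / sigma ** 2)
    d_sigma = np.mean(err * mu * (x - center) ** 2 / sigma ** 3)
    center -= lr * d_center
    sigma -= lr * d_sigma
```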
I'm building an image classifier that uses a DBN for feature learning and logistic regression to fine-tune the resulting network. Normally, the most convenient way to implement such an architecture in scikit-learn is the Pipeline class. But in my case I have ~10K unlabeled images and only ~300 labeled ones. Naturally, I want to use all the images to train the DBN, but fit the logistic regression with the labeled examples only.
I can think of implementing my own Pipeline class to handle this case, but first I'd like to know if something already exists. Does it?
The current scikit-learn Pipeline API is not well suited for supervised learning with unsupervised pre-training. Implementing your own wrapper class is probably the best way to go forward for that case.
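A minimal sketch of such a wrapper could look like this (the class name and fit signature are hypothetical, and scikit-learn's BernoulliRBM stands in here for one DBN layer; it assumes inputs scaled to [0, 1]):

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

class PretrainedPipeline:
    """Fit the unsupervised feature learner on all images, then fit
    the supervised classifier on the labeled subset only."""

    def __init__(self, feature_learner=None, classifier=None):
        self.feature_learner = feature_learner or BernoulliRBM(n_components=256)
        self.classifier = classifier or LogisticRegression(max_iter=1000)

    def fit(self, X_all, X_labeled, y_labeled):
        self.feature_learner.fit(X_all)                       # ~10K unlabeled + labeled
        features = self.feature_learner.transform(X_labeled)
        self.classifier.fit(features, y_labeled)              # ~300 labeled
        return self

    def predict(self, X):
        return self.classifier.predict(self.feature_learner.transform(X))
```

The point is simply that fit takes the unlabeled pool and the labeled subset as separate arguments, which the standard Pipeline.fit(X, y) signature cannot express.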
I have decided to use a feed-forward NN with back-propagation training for my OCR application for handwritten text. The input layer is going to have 32*32 (1024) neurons, and there will be at least 8-12 output neurons.
I found Neuroph easy to use from reading some articles, while at the same time Encog is reportedly several times faster. Considering the parameters in my scenario, which API is the more suitable one? I'd also appreciate it if you could comment on the number of input nodes I have chosen: is it too large a value? (Although that is off topic.)
First, my disclaimer: I am one of the main developers on the Encog project. This means I am more familiar with Encog than Neuroph and perhaps biased towards it. In my opinion, the relative strengths of each are as follows. Encog supports quite a few interchangeable machine learning methods and training methods. Neuroph is VERY focused on neural networks, and it lets you express a connection between just about anything. So if you are going to create very custom/non-standard (research) neural networks with topologies different from the typical Elman/Jordan, NEAT, HyperNEAT, and feedforward types, then Neuroph will fit the bill nicely.
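Library choice aside, the architecture from the question is small enough to sketch directly. Here it is with scikit-learn's MLPClassifier on dummy data, purely to illustrate the 1024-input / ~10-output shape (the 64-unit hidden layer is an arbitrary assumption, not a recommendation from either library):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((300, 32 * 32))   # flattened 32x32 images (dummy data)
y_train = rng.integers(0, 10, 300)     # 10 classes stands in for 8-12 outputs

clf = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                    solver="adam", max_iter=300)
clf.fit(X_train, y_train)
```

As for the side question: 1024 inputs is not an unusually large input layer for image OCR; the classic MNIST digit task uses 784 (28*28).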
I am planning to use neural networks for approximating a value function in a reinforcement learning algorithm. I want to do that to introduce some generalization and flexibility in how I represent states and actions.
Now, it looks to me like neural networks are the right tool to do that; however, I have limited visibility here since I am not an AI expert. In particular, it seems that neural networks are being replaced by other technologies these days, e.g. support vector machines, but I am unsure if this is a matter of fashion or if there is some real limitation in neural networks that could doom my approach. Do you have any suggestions?
It's true that neural networks are no longer in vogue as they once were, but they're hardly dead. The general reason for their fall from favor was the rise of the support vector machine, which converges globally and requires fewer parameter choices.
However, SVMs are very burdensome to implement and don't naturally generalize to reinforcement learning like ANNs do (SVMs are primarily used for offline decision problems).
I'd suggest you stick to ANNs if your task seems suitable to one, as within the realm of reinforcement learning, ANNs are still at the forefront in performance.
Here's a great place to start; just check out the section titled "Temporal Difference Learning" as that's the standard way ANNs solve reinforcement learning problems.
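For concreteness, here is a minimal semi-gradient TD(0) update with a one-hidden-layer value network; the state size, hidden width, and hyperparameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer value network: V(s) = w2 . tanh(W1 s + b1) + b2.
n_in, n_hid = 4, 16
W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
w2 = rng.normal(0, 0.1, n_hid);         b2 = 0.0

def value(s):
    h = np.tanh(W1 @ s + b1)
    return w2 @ h + b2, h

def td0_step(s, r, s_next, done, gamma=0.99, alpha=0.01):
    """Semi-gradient TD(0): nudge V(s) toward r + gamma * V(s')."""
    global W1, b1, w2, b2
    v, h = value(s)
    target = r if done else r + gamma * value(s_next)[0]
    delta = target - v                  # TD error
    grad_h = w2 * (1.0 - h ** 2)        # backprop through tanh
    w2 += alpha * delta * h
    b2 += alpha * delta
    W1 += alpha * delta * np.outer(grad_h, s)
    b1 += alpha * delta * grad_h
    return delta

# One illustrative update: state, reward, next state, done flag.
td0_step(np.zeros(n_in), 1.0, np.ones(n_in), False)
```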
One caveat though: the recent trend in machine learning is to use many diverse learning agents together via bagging or boosting. While I haven't seen this as much in reinforcement learning, I'm sure employing this strategy would still be much more powerful than an ANN alone. But unless you really need world-class performance (this is what won the Netflix Prize), I'd steer clear of this extremely complex technique.
It seems to me that neural networks are kind of making a comeback. For example, this year there were a bunch of papers at ICML 2011 on neural networks. I would definitely not consider them abandonware. That being said, I would not use them for reinforcement learning.
Neural networks are a decent general way of approximating complex functions, but they are rarely the best choice for any specific learning task. They are difficult to design, slow to converge, and get stuck in local minima.
If you have no experience with neural networks, then you might be happier using a more straightforward method of generalizing RL, such as coarse coding.
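For instance, a one-dimensional coarse coding is just a set of overlapping receptive fields; the value function becomes linear in the resulting binary features, which trains far more predictably than a full neural network (the field count and width below are arbitrary choices):

```python
import numpy as np

# Overlapping 1-D receptive fields covering states in [0, 1].
centers = np.linspace(0.0, 1.0, 20)
width = 0.15

def coarse_code(x):
    """Binary feature vector: which fields does state x fall inside?"""
    return (np.abs(centers - x) < width).astype(float)

# V(x) is then linear in the features, trained with a plain TD update.
w = np.zeros_like(centers)

def v(x):
    return w @ coarse_code(x)
```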
Theoretically, it has been proven that neural networks can approximate any continuous function (given enough hidden neurons and the necessary inputs), so no, I don't think neural networks will ever be abandonware.
SVMs are great, but they cannot be applied as readily to every kind of problem, while neural networks can be adapted to almost any purpose.
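As a toy illustration of that approximation power, even a small network fits a nontrivial function well (MLPRegressor and all the sizes here are my choices, nothing canonical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X).ravel()               # the target function to approximate

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.score(X, y))              # R^2 on the training data, typically near 1
```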
Using neural networks in combination with reinforcement learning is standard and well known, but be careful to plot and debug your neural network's convergence to check that it works correctly, as neural networks are notoriously hard to implement and train correctly.
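Something along these lines is enough for that check; run_episode below is a hypothetical stub that emits a fake, decaying error so the script runs standalone, and in practice it would run your agent and return the episode's summed |TD error| or loss:

```python
import numpy as np
import matplotlib.pyplot as plt

def run_episode(i):
    """Hypothetical stub: replace with your real training episode."""
    return 1.0 / (1 + 0.05 * i) + np.random.default_rng(i).normal(0, 0.02)

errors = [run_episode(i) for i in range(200)]
plt.plot(errors)
plt.xlabel("episode")
plt.ylabel("summed |TD error|")
plt.title("Convergence check")
plt.show()
```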
Also be very careful about the representation of the problem you give to your neural network (i.e., the input nodes): could you, or could an expert, solve the problem given only what you feed to your net as inputs? Very often, people implementing neural networks don't give the network enough information to reason with, so be careful with that.