Training on a fitness function - neural-network

I am looking at FANN (Fast Artificial Neural Network) to create a neural network to drive a car around a track in a simulation.
It's possible to train on a fixed data set, but this isn't suitable for training a car to drive. I would like to use a fitness function to train my NN. Is this possible?
Is it possible to tell FANN to use a fitness function rather than a fixed data set to train a NN?

I would like to use a fitness function to train my NN. Is this possible?
A fitness function judges how well the network performs; it is used to label generated data or to select from it. It is not a function of the network itself, so it cannot simply replace a training set.
Is it possible to tell FANN to use a fitness function rather than a fixed data set to train a NN?
fann_train adjusts the weights for one individual input/output pair at a time; with the training algorithm set to FANN_TRAIN_INCREMENTAL the weights are updated after every pattern, so you can train on data as your simulation generates it rather than on a fixed data set.
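To make that concrete, here is a minimal C sketch of how such a loop might look: the simulation generates the data, the fitness score is used to decide what the target output should have been, and fann_train applies one incremental update per pattern. The helpers get_sensor_inputs, evaluate_fitness and derive_target are hypothetical stand-ins for your simulator, not part of FANN, and the layer sizes are arbitrary.

```c
/* Minimal sketch of fitness-driven online training with FANN.
 * get_sensor_inputs(), evaluate_fitness() and derive_target() are
 * hypothetical simulator hooks -- they are not part of FANN. */
#include <floatfann.h>

void get_sensor_inputs(fann_type *input);                  /* read simulator state */
double evaluate_fitness(const fann_type *output);          /* score the chosen action */
void derive_target(const fann_type *output, double fitness,
                   fann_type *target);                     /* turn the score into a label */

int main(void)
{
    /* 3 layers: 4 sensor inputs, 6 hidden neurons, 2 outputs (e.g. steering, throttle) */
    struct fann *ann = fann_create_standard(3, 4, 6, 2);
    fann_set_training_algorithm(ann, FANN_TRAIN_INCREMENTAL);

    for (int step = 0; step < 100000; step++) {
        fann_type input[4], target[2];
        get_sensor_inputs(input);
        fann_type *output = fann_run(ann, input);           /* let the current net drive */

        double fitness = evaluate_fitness(output);
        derive_target(output, fitness, target);

        fann_train(ann, input, target);                     /* one incremental weight update */
    }

    fann_save(ann, "driver.net");
    fann_destroy(ann);
    return 0;
}
```

The fitness score could equally be used to select among several candidate networks (e.g. in a genetic algorithm), with FANN handling only the network evaluation.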

Related

Can a single input single output neural network with y=x as activation function reflect non-linear behavior?

I am currently learning a little bit about neural networks. One question I can't really wrap my head around is how neural networks reflect non-linear behavior. From my understanding, there is no way for a neural network to reflect non-linear behavior inside a compact set.
For example if I would take the function from this question:
y = x^2
and I would use a neural network with a single input and single output, the best the neural network could do for each compact set [x0...xn] is a linear function spanning from one end of the set to the other, since in the end all calculations inside the net are linear.
Do I have some misunderstanding about this concept?
The ANN's capability to model non-linear behaviour arises from the (usually) non-linear activation function.
If the activation function is linear, then the process of training the network is just another way to create a linear (or multi-linear) fit of input and output data.
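A quick way to see why a purely linear network stays linear: with the identity activation, two stacked layers compose to

$$ f(x) = W_2\,(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2), $$

which is again an affine function of x, so no choice of weights can reproduce y = x^2 on an interval.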
The activation function is exactly the part of a neural network that brings in non-linearity. If you use a linear activation function, you cannot train a non-linear model (and thus cannot fit a quadratic or other non-linear function).
The part you are probably interested in is the Universal Approximation Theorem, which says that any continuous function on a compact set can be approximated by a neural network with a single hidden layer (some assumptions on the activation function apply, though). Take into account that this theorem says nothing about optimizing such a network: it does not guarantee you can train it with a specific algorithm, only that such a network exists. It also says nothing about the number of neurons you should use.
You can check the following links to get more details:
Original proof with sigmoid activation function: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.441.7873&rep=rep1&type=pdf
And a more friendly derivation: http://mcneela.github.io/machine_learning/2017/03/21/Universal-Approximation-Theorem.html
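To make this concrete with the FANN library from the first question, here is a small sketch that fits y = x^2 on [-1, 1] using a single non-linear hidden layer. The layer size, sampling grid and epoch count are arbitrary choices for illustration, not values the theorem prescribes.

```c
/* Sketch: approximating y = x^2 on [-1, 1] with FANN.
 * One input, a small non-linear hidden layer, one linear output.
 * Layer size, learning settings and epoch count are arbitrary. */
#include <stdio.h>
#include <floatfann.h>

int main(void)
{
    struct fann *ann = fann_create_standard(3, 1, 8, 1);   /* 1-8-1 network */
    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_LINEAR);

    for (int epoch = 0; epoch < 5000; epoch++) {
        for (int i = 0; i <= 20; i++) {
            fann_type x = -1.0f + 0.1f * (fann_type)i;      /* sample points in [-1, 1] */
            fann_type y = x * x;
            fann_train(ann, &x, &y);                        /* one incremental update */
        }
    }

    fann_type probe = 0.5f;
    printf("f(0.5) ~ %f (target 0.25)\n", fann_run(ann, &probe)[0]);

    fann_destroy(ann);
    return 0;
}
```

With a linear hidden activation instead, the same training loop can only ever produce a straight line.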

Can a neural network be trained with just a single class of training data?

I just want to know if a neural network can be trained with a single class of data set. I have a set of data that I want to train a neural network with. After training it, I want to give it new data (for testing) to check whether it can recognize the new data as being similar to the training samples or not.
Is this possible with a neural network? If yes, would that be supervised or unsupervised learning?
I know neural networks can be used for classification when there are multiple classes, but I have not seen it done with a single class before. A good explanation and a link to an example would be much appreciated. Thanks
Of course it can be. But in this case it will only recognize this one class that you have trained it with. And depending on the expected output you can measure the similarity to the training data.
An NN, after training, is just a function. For classification problems you can imagine it as a function that takes data as input and returns an integer indicating which class it belongs to. That being said, if you have only one class, represented by the integer value 1, and the new data is not similar to that class, you will get something like 1.555; it will not tell you that the data belongs to another class, because you have introduced only one, but it will definitely give you a hint about its similarity.
NNs are considered supervised learning, because before training you have to provide both the input and the target, i.e. the expected output.
Training a network with only a single class of data is popularly known as one-class classification. Various algorithms have been developed for it, such as the one-class SVM, Support Vector Data Description, and OCKELM. Tax and Duin developed a MATLAB toolbox for this that supports various one-class classifiers:
DD Toolbox
One-class SVM
Kernel Ridge Regression based or Kernelized ELM based or LSSVM(where bias=0) based One-class Classification
There is a paper, "Anomaly Detection Using One-Class Neural Networks", which combines the one-class SVM approach with neural networks.
Here is the source code. However, I've had difficulty connecting the source code to the paper.
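For a concrete starting point, here is a rough sketch of one-class training with libsvm's C API (just one of the options above; the linked toolboxes are MATLAB). The data points and the gamma/nu settings are made up for illustration.

```c
/* Minimal one-class classification sketch using libsvm's C API.
 * All training samples come from the single known class; the model
 * then flags new points as similar (+1) or dissimilar (-1). */
#include <stdio.h>
#include "svm.h"   /* libsvm header */

#define N 4        /* number of (made-up) training samples */

int main(void)
{
    /* Each sample: two features, terminated by index = -1. */
    struct svm_node x[N][3] = {
        {{1, 0.9}, {2, 1.0}, {-1, 0}},
        {{1, 1.1}, {2, 0.9}, {-1, 0}},
        {{1, 1.0}, {2, 1.2}, {-1, 0}},
        {{1, 0.8}, {2, 1.1}, {-1, 0}},
    };
    struct svm_node *rows[N] = { x[0], x[1], x[2], x[3] };
    double y[N] = { 1, 1, 1, 1 };        /* labels are ignored for ONE_CLASS */

    struct svm_problem prob = { .l = N, .y = y, .x = rows };

    struct svm_parameter param = {
        .svm_type = ONE_CLASS,
        .kernel_type = RBF,
        .gamma = 0.5,                    /* arbitrary illustration values */
        .nu = 0.1,                       /* upper bound on fraction of training outliers */
        .cache_size = 100,
        .eps = 1e-3,
    };

    struct svm_model *model = svm_train(&prob, &param);

    /* A new point: svm_predict returns +1 (similar) or -1 (outlier). */
    struct svm_node query[3] = { {1, 1.0}, {2, 1.0}, {-1, 0} };
    printf("prediction: %g\n", svm_predict(model, query));

    svm_free_and_destroy_model(&model);
    svm_destroy_param(&param);
    return 0;
}
```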

How to simulate neural network by changing different parameters after training in MATLAB?

I have trained a neural network for a particular time series in MATLAB and then saved the network. If I now want to simulate the network with different parameters, such as changing the number of neurons, the number of hidden layers, the transfer functions, the learning rate, or the momentum coefficient, can I do it without training the network again?
If not, what are the criteria for selecting the best parameters for my neural network?
How should I configure my neural network in MATLAB to do all this?
No, because you save the whole model to file, including the weights, the activation functions, and the whole structure (layers). Changing any of those parameters defines a different network, which has to be trained again. You can train a few networks and save each to file if you want to check later, on real (validation) data, which network is better.
Check this also ;) http://people.cs.umass.edu/~btaylor/publications/PSI000008.pdf

What is cost function in neural network?

Could someone please explain why the cost function is so important in a neural network? What is its purpose?
Note: I'm just getting introduced to the subject of neural networks and have not fully understood this yet.
In artificial neural networks, the cost function returns a number representing how well the neural network performed in mapping training examples to the correct output.
See here and here
In other words, after you train a neural network, you have a mathematical model whose weights were adjusted to get better results. The weights and the activation function of each neuron together make up the overall function that is the neural network. The cost function measures how far that function's outputs are from the correct outputs, and its purpose is to be minimized during the training step so that the network produces better results.
The cost function returns a scalar value called the 'cost', which tells you how good or bad your model is. There are several cost functions that can be used; a lower cost indicates a better model. The reason cost functions are used in neural networks is that the cost is what the model uses to improve.
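As a concrete example of one common choice, mean squared error simply averages the squared difference between the network's predictions and the correct outputs; a small sketch:

```c
#include <stddef.h>

/* Mean squared error: one common cost function.
 * Averages the squared difference between the network's
 * predictions and the correct (target) outputs. */
double mse_cost(const double *predicted, const double *target, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double err = predicted[i] - target[i];
        sum += err * err;
    }
    return sum / (double)n;
}
```

Training then amounts to adjusting the weights so that this number gets smaller.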

In Matlab, How to use already trained neural network on real time values?

Using nntool (the Neural Network Manager) in MATLAB, we have created a neural network named network1. The network type is feed-forward backprop, the training function is TRAINLM, the learning function is LEARNGDM, and the performance function is MSE. The number of layers is 2, the transfer function is TANSIG, and the number of inputs is 2.
We have trained it using known datasets.
Now, we want to use this trained neural network on real-time values (dynamically, one by one) to get the output.
We are unable to use the network on real-time values.
So, please guide us through the steps to use trained neural network on real time values.
If you created an ANN via
network1 = feedforwardnet;
or something of that kind and then trained it with your known data, you should be able to use said net via
outputs = network1(inputs);
You can create a function from the neural network that you have trained and use it like a regular MATLAB function.
You can create it either with the genFunction command or via the GUI in the Neural Network Toolbox.
genFunction(net,pathname)
If you want the generated function to accept only matrix arguments (no cell arrays), use this command:
genFunction(net,pathname,'MatrixOnly','yes')