I am trying to classify portions of time series data using a feed-forward neural network with 20 neurons in a single hidden layer and 3 outputs corresponding to the 3 events I would like to recognize. There are many other things that I could classify in the data (obviously), but I don't really care about them for the time being. Neural network creation and training has been performed using Matlab's neural network toolbox for pattern recognition, as this is a classification problem.
In order to do this I am sequentially populating a moving window, then inputting the window into the neural network. The issue is that I obviously cannot train on every possible shape the time series takes on. Because of this, I often get windows filled with data that look very different from the windows I used to train the network, yet they still produce outputs near 1.
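For reference, the windowing loop is roughly as follows (a minimal sketch, not my exact code; the variable names are illustrative, and net stands for the trained pattern-recognition network):

% Minimal sketch of the moving-window classification loop.
% x is the full 1xN time series; net is the trained patternnet.
winLen = 20;                          % window length (assumption)
nWin = numel(x) - winLen + 1;
labels = zeros(1, nWin);
for k = 1:nWin
    w = x(k:k+winLen-1).';            % 20x1 column, one sample for the net
    scores = net(w);                  % 3x1 vector of class outputs
    [~, labels(k)] = max(scores);     % pick the highest-scoring event
end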
Essentially, I trained the ANN with windows from 20 different data sets corresponding to three shapes: steady state; a curve that starts with a negative slope and levels off to 0 slope (essentially the left half of a parabola that opens upwards); and a curve with 0 slope that quickly declines (the right half of a parabola that opens downwards).
Am I incorrect in thinking that if I input data that doesn't correspond to any of the items I trained the ANN with, it should output values near 0 for all outputs?
Or is it likely because these three shapes basically cover all the bases (steady state, increasing, and decreasing), so that despite large differences in slope, something is always classified?
I guess I just need a nudge in the right direction.
Neural network output values
A neural network can only be expected to produce specific output values for input patterns that were presented during training.
A neural network will not consistently output 0 for untrained input values.
A solution is simply to also present the network with an array of input values that should result in all outputs being near 0.
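For example, with MATLAB's patternnet this could look as follows (a hedged sketch; the fourth "none of the above" class and all variable names are illustrative, not part of the original setup):

% Sketch: add a fourth "none of the above" class so untrained shapes have
% somewhere to go. Xpos/Tpos are the original windows and targets (20xM, 3xM);
% Xneg holds K windows that match none of the 3 events.
X = [Xpos, Xneg];
T = [[Tpos; zeros(1, size(Tpos, 2))], ...                  % old targets, 4th row = 0
     [zeros(3, size(Xneg, 2)); ones(1, size(Xneg, 2))]];   % reject class = 1
net = patternnet(20);                 % 20 hidden neurons, as in the question
net = train(net, X, T);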
Related
In my undergrad thesis I am creating a neural network to control an automated shifting algorithm for a vehicle.
I have created the NN from scratch as an .m script, which works correctly; I tested it by having it recognize some shapes.
Some brief background information:
An NN wires together neurons, which are mathematical blocks arranged in layers. There are multiple layers, and the output of one layer is the input of the next layer. The actual output is subtracted from the known output, and the error is obtained in this manner. Using the back-propagation algorithm, which is a set of algebraic equations, the neurons' coefficients are updated.
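As a concrete illustration of that update, a simplified single-layer version looks like this (a sketch, not my actual .m script):

% Simplified sketch of the update described above for one tanh layer:
% the error (known output minus actual output) drives the weight change.
a = tanh(W*x + b);            % actual output for input x
e = t - a;                    % error against the known output t
g = e .* (1 - a.^2);          % gradient through the tanh activation
W = W + lr * g * x.';         % back-propagation weight update
b = b + lr * g;               % bias update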
What I want to do:
In the code there are 6 input matrices (they don't have to be matrices, just any data) and corresponding outputs; let's call them the x(i) matrices and y(i) vectors. In a for loop I go through each matrix and vector to train the network. Finally, using the last updated coefficients, the network gives responses to unknown input.
I couldn't figure out how to reproduce that for loop in Simulink so that it goes through each of the different input and output pairs. When the network is done with one pair, it should switch to the next input, compare against the corresponding output, and then update the coefficient matrices.
I modeled the layers as described and fed them just one input, but I need multiple inputs.
When it comes to the automatic transmission control problem, it should do all of this in real time: continuously read the output, update the coefficients, and produce the decision.
Check out the "For Each Subsystem" block; it has existed since R2011b.
To create the input signals, use the "Concatenate" block, which would have six inputs in your case and a three-dimensional output with x.dim = [1x20x6]; you can then iterate over the third dimension.
This is a very useful pattern for creating smaller models that run faster and for keeping your code DRY (Don't Repeat Yourself).
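From the MATLAB side, the stacked signal can be prepared roughly like this (a sketch; x1..x6 stand for your six 1x20 training inputs):

% Sketch: stack the six training inputs along the third dimension, matching
% the [1x20x6] signal the Concatenate block would produce in Simulink.
X = cat(3, x1, x2, x3, x4, x5, x6);   % size(X) is [1 20 6]
for i = 1:size(X, 3)                  % what the For Each Subsystem iterates over
    xi = X(:, :, i);                  % one input per iteration
    % feed xi to the network, compare with y{i}, update the coefficients
end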
How is using simulated annealing in conjunction with a feed-forward neural network different from simply resetting the weights (and placing the hidden layer into a new error valley) when a local minimum is reached? Is simulated annealing used by the FFNN as a more systematic way of moving the weights around to find a global minimum, so that only one iteration is performed each time the validation error begins to increase relative to the training error, slowly moving the current position across the error function? In that case, the simulated annealing is independent of the feed-forward network, and the feed-forward network is dependent on the simulated annealing output. If not, and the simulated annealing is directly dependent on results from the FFNN, I don't see how the simulated annealing trainer would receive this information in terms of how to update its own weights (if that makes sense). One of the examples mentions a cycle (multiple iterations), which doesn't fit my first assumption.
I have looked at different examples where network.fromArray() and network.toArray() are used, but I only see network.encodeToArray() and network.decodeFromArray(). What is the most current way (v3.2) to transfer weights from one type of network to another? Is this the same when using genetic algorithms, etc.?
Neural network training algorithms, such as simulated annealing, are essentially searches. The weights of the neural network are essentially vector coordinates that specify a location in a high-dimensional space.
Consider hill-climbing, possibly the simplest training algorithm. You adjust one weight, thus moving in one dimension, and see if it improves your score. If the score improved, great: stay there and try a different dimension next iteration. If your score did NOT improve, retreat and try a different dimension next time. Think of a human looking at every point they can reach in one step and choosing the step that increases their altitude the most. If no step will increase altitude (you are standing in the middle of a valley), then you're stuck. This is a local minimum.
Simulated annealing adds one critical component to hill-climbing: we might move to a worse location (it is not greedy). The probability that we will move to a worse location is determined by the decreasing temperature.
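In rough MATLAB terms, the acceptance rule looks like this (a sketch; score(w) stands for whatever error function the network reports, and all constants are illustrative):

% Sketch of the simulated-annealing acceptance rule over a weight vector w.
% score(w) is assumed to return the network error (lower is better).
nWeights = 25; maxIter = 500; T = 10;      % illustrative constants
w = randn(nWeights, 1);                    % current location in weight space
for iter = 1:maxIter
    wNew = w + 0.1 * randn(size(w));       % random move in weight space
    d = score(wNew) - score(w);            % change in error
    if d < 0 || rand() < exp(-d / T)       % always accept improvements;
        w = wNew;                          % sometimes accept worse moves
    end
    T = 0.95 * T;                          % cooling: worse moves get rarer
end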
If you look inside the NeuralSimulatedAnnealing classes you will see calls to NetworkCODEC.NetworkToArray() and NetworkCODEC.ArrayToNetwork(). This is how the weight vector is directly updated.
In my project, one of my objectives is to find outliers in aeronautical engine data. I chose to use the Replicator Neural Network to do so and read the following report on it (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.3366&rep=rep1&type=pdf), but I am having a slight understanding issue with the step-wise function (page 4, figure 3) and the prediction values it produces.
The replicator neural network is best described in the above report, but as background: the one I have built has the same number of outputs as inputs and 3 hidden layers with the following activation functions:
Hidden layer 1: tanh sigmoid, S1(θ) = tanh(θ)
Hidden layer 2: step-wise (staircase), S2(θ) = 1/2 + 1/(2(N−1)) · Σ_{j=1}^{N−1} tanh[a3(θ − j/N)], where N is the number of steps and a3 controls their steepness
Hidden layer 3: tanh sigmoid, S1(θ) = tanh(θ)
Output layer 4: standard sigmoid, S3(θ) = 1/(1 + e^(−θ))
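In MATLAB, my middle-layer function amounts to something like this (a sketch; the constants N = 4 and a3 = 100 are assumptions here, based on the values the report suggests):

% Sketch of the step-wise (staircase) activation S2 defined above.
% N is the number of steps; a3 controls how sharp each step is.
N = 4; a3 = 100;
S2 = @(theta) 0.5 + 1/(2*(N-1)) * ...
     sum(tanh(a3 * (theta - (1:N-1)/N)), 2);   % sum over j = 1..N-1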
I have implemented the algorithm and it seems to be training (the mean squared error decreases steadily during training). The only thing I don't understand is how predictions are made once the middle layer with the step-wise activation function is applied, since it causes the 3 middle nodes' activations to become specific discrete values (e.g., my last activations on the 3 middle nodes were 1.0, -1.0, 2.0). These values are then forward-propagated, and I get very similar or exactly the same predictions every time.
The section in the report on pages 3-4 best describes the algorithm, but I have no idea what I have to do to fix this, and I don't have much time either :(
Any help would be greatly appreciated.
Thank you
I'm facing the problem of implementing this algorithm, and here is my insight into the problem you might have had: the middle layer, by using a step-wise function, is essentially performing clustering on the data. Each middle-layer neuron transforms the data into a discrete number that can be interpreted as a coordinate in a grid system. Imagine we use two neurons in the middle layer with step-wise values ranging from -2 to +2 in increments of 1. This defines a 5x5 grid in which each set of features is placed. The more steps you allow, the more grid cells; the more grid cells, the more "clusters" you have.
This all sounds good: after all, we are compressing the data into a smaller (lower-dimensional) representation, which is then used to try to reconstruct the original input.
This step-wise function, however, has a big problem of its own: back-propagation does not work (in theory) with step-wise functions. You can find more about this in this paper. In that paper they suggest switching the step-wise function for a ramp-like function, that is, allowing an almost infinite number of clusters.
Your problem might be directly related to this. Try switching the step-wise function to a ramp-like one and measure how the error changes throughout the learning phase.
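For instance (a sketch; the ramp keeps the staircase's [0, 1] output range but stays differentiable everywhere):

% Sketch: a ramp-like replacement for the staircase, so the gradient no
% longer vanishes between steps during back-propagation.
ramp = @(theta) min(max(theta, 0), 1);    % linear between 0 and 1, clipped
% Retrain with ramp in place of S2 and compare the training-error curves.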
By the way, do you have any of this code available anywhere for other researchers to use?
I am having some issues using a neural network. I am using a nonlinear activation function for the hidden layer and a linear function for the output layer. Adding more neurons in the hidden layer should have increased the capability of the NN and let it fit the training data better, i.e., have less error on the training data.
However, I am seeing a different phenomenon: adding more neurons decreases the accuracy of the neural network, even on the training set.
Here is the graph of the mean absolute error as the number of neurons increases; the accuracy on the training data is decreasing. What could be the cause of this?
Could it be that MATLAB's nntool splits the data randomly into training, test, and validation sets to check generalization, instead of using cross-validation?
Also, as I add neurons I see lots of negative output values, while my targets are supposed to be positive. Could that be another issue?
I am not able to explain the behavior of the NN here. Any suggestions? Here is a link to my data, consisting of the covariates and targets:
https://www.dropbox.com/s/0wcj2y6x6jd2vzm/data.mat
I am unfamiliar with nntool, but I would suspect that your problem is related to the selection of your initial weights. Poor initial weight selection can lead to very slow convergence or failure to converge at all.
For instance, notice that as the number of neurons in the hidden layer increases, the number of inputs to each neuron in the output layer also increases (one for each hidden unit). Say you are using a logistic sigmoid in your hidden layer (always positive) and pick your initial weights from a uniform distribution over a fixed interval. Then, as the number of hidden units increases, the summed input to each neuron in the output layer also increases, because there are more incoming connections. With a very large number of hidden units, your initial solution may become very large and result in poor convergence.
Of course, how this all behaves depends on your activation functions, the distribution of the data, and how it is normalized. I would recommend looking at Efficient BackProp by Yann LeCun for some excellent advice on normalizing your data and selecting initial weights and activation functions.
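For example, one common remedy in the spirit of Efficient BackProp is to scale the initial weights by the fan-in (a sketch; the layer sizes are illustrative):

% Sketch: fan-in-scaled uniform initialization, so the summed input to each
% output neuron stays moderate no matter how many hidden units feed it.
nIn = 10; nHidden = 50; nOut = 1;         % illustrative layer sizes
r1 = 1/sqrt(nIn);                         % scale by incoming connections
W1 = -r1 + 2*r1*rand(nHidden, nIn);
r2 = 1/sqrt(nHidden);                     % shrinks as nHidden grows
W2 = -r2 + 2*r2*rand(nOut, nHidden);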
I used ntstool to create a NAR (nonlinear autoregressive) net object by training on a 1x1247 input vector (daily stock prices for 6 years).
I have finished all the steps and saved the resulting net object to workspace.
Now I am clueless about how to use this object to predict y(t) for, for example, t = 2000 (I trained the model on t = 1:1247).
In some other threads, people recommended using the sim(net, t) function; however, this gives me the same result for any value of t (and the same with net(t)).
I am not familiar with the specific neural net commands, but I think you are approaching this problem in the wrong way. Typically you want to model the evolution in time. You do this by specifying a certain window, say 3 months.
What you are training on now is a single input vector, which carries no information about evolution in time. The reason you always get the same prediction is that you only used a single point for training (even though it is 1247-dimensional, it is still 1 point).
You probably want to make input vectors of this nature (for simplicity, assume you are working with months):
[month1 month2; month2 month3; month3 month4]
This example contains 2 training points with the evolution of 3 months. Note that they overlap.
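Since you built the net with ntstool, the command-line equivalent would be roughly the following (a sketch; the delays, hidden size, and prediction horizon are assumptions):

% Sketch: train a NAR net on the series, run it closed-loop over the known
% data, then continue from the final delay states to predict past t = 1247.
% price is the 1x1247 vector of daily prices.
T = num2cell(price);
net = narnet(1:2, 10);                    % 2 feedback delays, 10 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
net = train(net, Xs, Ts, Xi, Ai);
netc = closeloop(net);                    % feed predictions back as inputs
[Xc, Xic, Aic] = preparets(netc, {}, {}, T);
[~, Xf, Af] = netc(Xc, Xic, Aic);         % run over known data to get states
yFuture = netc(cell(0, 753), Xf, Af);     % predict t = 1248 through 2000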
Use the Network
After the network is trained and validated, the network object can be used to calculate the network response to any input. For example, if you want to find the network response to the fifth input vector in the housing data set, you can use the following:
a = net(houseInputs(:,5))
a =
34.3922
If you try this command, your output might be different, depending on the state of your random number generator when the network was initialized. Below, the network object is called to calculate the outputs for a concurrent set of all the input vectors in the housing data set. This is the batch mode form of simulation, in which all the input vectors are placed in one matrix. This is much more efficient than presenting the vectors one at a time.
a = net(houseInputs);
Each time a neural network is trained, it can arrive at a different solution due to different initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain it several times.
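For instance (a sketch; the loop count, fitnet architecture, and use of mean squared error are illustrative):

% Sketch: retrain several times and keep the network with the lowest error,
% since each run starts from different initial weights and data divisions.
[houseInputs, houseTargets] = house_dataset;
bestErr = Inf;
for k = 1:10
    net = fitnet(10);                     % illustrative architecture
    net = train(net, houseInputs, houseTargets);
    err = mse(net, houseTargets, net(houseInputs));
    if err < bestErr
        bestErr = err;
        bestNet = net;                    % keep the best-performing network
    end
end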
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Neural Network Generalization and Avoid Overfitting.