Is there a kind of NN that can assign importance to some inputs?
I have a problem like this (currently solved by 2 different NNs):
SITUATION 1)
inputs: 1 0 1 0 1 0 1 : target: 23
SITUATION 2)
inputs: 1 0 1 0 1 0 1 : target: 29
Can I use the same NN for both, using the SITUATION as another input to a single NN?
One problem with this approach is that I have 50 different SITUATIONS.
Anyone with a good idea?
Andre
I think your best bet would be adding another 50 input neurons and lighting up one of them to signal your situation.
To make it smaller, you could use just 6 input neurons and light them up in binary (situation 13 = 001101 as the input pattern).
Another solution would be training the network separately for each situation and saving its weights+biases. Then, to solve a case, you would first load the weights+biases corresponding to the situation you want and then calculate the outputs.
The last option I can think of would be creating 50 different neural networks and using the one you need.
I think the solution of having 6 additional neurons in binary is the right way to go.
You can have up to 64 different situations; adding a 7th neuron extends your situation count to 128, and every further neuron doubles it again.
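A small sketch of that binary encoding, assuming NumPy and a 7-input feature vector like the one in the question (the helper name encode_situation is mine):

    import numpy as np

    def encode_situation(situation_id, n_bits=6):
        """Encode a situation index (0-63 with 6 bits) as 0/1 inputs, most-significant bit first."""
        return [(situation_id >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

    # Situation 13 -> [0, 0, 1, 1, 0, 1]
    extra_inputs = encode_situation(13)

    # Append the situation bits to the original 7 inputs
    inputs = np.array([1, 0, 1, 0, 1, 0, 1] + extra_inputs)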
I use a NN with:
1 input layer (2 neurons),
1 hidden layer (2 neurons),
1 output layer (1 neuron).
I train it using feed-forward and the backpropagation algorithm. I also initialize the weights randomly in the range [-1, 1] (I also tried [0, 1], but it doesn't actually change anything). Most of the time (about 4/5 runs) everything trains properly: for inputs [00, 01, 10, 11] it outputs [~0.1, ~0.9, ~0.9, ~0.1], respectively. But the remaining 1/5 of runs it outputs something like [~0.5, ~0.6, ~0.4, ~0.1] (by ~number I mean a value around that number, e.g. ~0.1 may be 0.098 or 0.132 or similar).
It doesn't matter if I train it for, let's say, 20 seconds or 10 minutes, it's still the same.
I'm pretty sure it's because of the randomly initialized weights, but I'm not sure how to fix that.
How should I initialize the weights for this problem, as well as for others (if they are indeed the cause)?
How do you do that? Do you have any idea what causes this problem, or do you need some code? Thanks in advance.
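For reference, a minimal NumPy sketch of the setup described above (2-2-1 network, sigmoid activations, weights and biases drawn uniformly from [-1, 1]); all names are illustrative:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng()

    # 2-2-1 network, weights and biases drawn uniformly from [-1, 1]
    W1 = rng.uniform(-1, 1, size=(2, 2))  # input -> hidden
    b1 = rng.uniform(-1, 1, size=2)
    W2 = rng.uniform(-1, 1, size=(2, 1))  # hidden -> output
    b2 = rng.uniform(-1, 1, size=1)

    def forward(x):
        hidden = sigmoid(x @ W1 + b1)
        return sigmoid(hidden @ W2 + b2)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(forward(X))  # outputs before training, one value per XOR input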
Suppose a dataset's independent variables comprise both continuous and binary variables. Usually the label/outcome column is converted to a one-hot vector, and continuous variables can be normalized. But what should be applied to the binary variables?
AGE       RACE  GENDER  NEURO  EMOT
15.95346  0     0       3      1
14.57084  1     1       0      0
15.8193   1     0       0      0
15.59754  0     1       0      0
How does this apply for logistic regression and neural networks?
If the range of the continuous variable is small, encode it into binary form and use each bit of that binary form as a predictor.
For example, the number 2 is 10 in binary. Therefore:
predictor_bit_0 = 0
predictor_bit_1 = 1
Try it and see if it works. Just to warn you: this method is quite subjective and may or may not yield good results for your data. I'll keep you posted if I find a better solution.
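A quick sketch of that bit-encoding idea (the predictor names follow the example above):

    def to_bit_predictors(value, n_bits=2):
        """Split a small-range integer into separate 0/1 bit predictors."""
        return {f"predictor_bit_{i}": (value >> i) & 1 for i in range(n_bits)}

    print(to_bit_predictors(2))  # {'predictor_bit_0': 0, 'predictor_bit_1': 1}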
This is just for fun to see if neural network predictions increase my odds of getting pick 3 lotteries correct.
Right now I just have a simple model with 30 input units, 30 hidden units, and 30 output units.
I use 30 because if the pick-3 result was something like 124, I set all my inputs to 0 except input[1] = 1 (indices 0 to 9 are assigned to the first digit), input[12] = 1 (indices 10 to 19 to the middle digit), and input[24] = 1 (indices 20 to 29 to the last digit). I do that so that my inputs can store the placement of the digits.
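A sketch of that encoding, assuming the draw arrives as a string like "124" (the function name is mine):

    import numpy as np

    def encode_draw(draw):
        """Encode a pick-3 draw like '124' as a 30-element 0/1 vector."""
        x = np.zeros(30)
        for position, digit in enumerate(draw):
            x[position * 10 + int(digit)] = 1.0  # slots 0-9, 10-19, 20-29
        return x

    x = encode_draw("124")
    print(np.nonzero(x)[0])  # [ 1 12 24]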
I am training it so that if I enter the inputs for one draw, it gives me the outputs for the next draw.
Do you know of a better model (if you have had experience with neural networks applied to pick-3 lotteries)?
I'm trying to make an ANN which could tell me if there is causality between my input and output data. The data is as follows:
My inputs are measured values of pesticides (19 in total) in an area, e.g.:
-1.031413662 -0.156086316 -1.079232918 -0.659174849 -0.734577317 -0.944137546 -0.596917991 -0.282641072 -0.023508282 3.405638835 -1.008434997 -0.102330305 -0.65961995 -0.687140701 -0.167400684 -0.4387984 -0.855708613 -0.775964435 1.283238514
And the output is the measured value of plant-something in the same area (55 in total), e.g.:
0.00 0.00 0.00 13.56 0 13.56 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 13.56 0 0 0 1.69 0 0 0 0 0 0 0 0 0 0 1.69 0 0 0 0 13.56 0 0 0 0 13.56 0 0 0 0 0 0
Values for the inputs are in the range -2.5 to 10, and for the outputs from 0 to 100.
So the question I'm trying to answer is: to what extent does pesticide A affect plant-something?
What are good ways to model (represent) the input/output neurons so they can process the data above? And how should I scale/convert the input/output data to be useful for a NN?
Is there a book/paper that I should look at?
First, a neural network cannot find the causality between output and input, only the correlation (just like every other probabilistic method). Causality can only be derived logically from reasoning (and even then it's not always clear; it all depends on your axioms).
Secondly, about how to design a neural network to model your data, here is a pretty simple rule that can generally be applied to make a first working draft:
set the number of input neurons = the number of input variables for one sample
set the number of output neurons = the number of output variables for one sample
then play with the number of hidden layers and the number of hidden neurons per layer. In practice, you want to use the fewest hidden layers/neurons that still model your data correctly, but enough that the function approximated by your neural network fits the data properly (otherwise the output error will be huge compared to the real output dataset).
Why do you need just enough neurons but not too many? Because if you use a lot of hidden neurons, you are sure to overfit your data, and thus you will make perfect predictions on your training dataset but not in the general case, when you use real datasets. Theoretically, this is because a neural network is a function approximator, so it can approximate any function, but using too high-order a function leads to overfitting. See PAC learning for more info on this.
So, in your case, the first thing to do is to clarify how many variables you have in input and in output for each sample. If it's 19 in input, create 19 input neurons; if you have 55 output variables, create 55 output neurons.
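To make that concrete, here is a minimal sketch using Keras (one library choice among many; the hidden-layer size of 30 is an arbitrary starting point, not a recommendation):

    from tensorflow import keras

    # 19 input variables -> one hidden layer -> 55 output variables
    model = keras.Sequential([
        keras.layers.Input(shape=(19,)),
        keras.layers.Dense(30, activation="sigmoid"),  # tune this size per the advice above
        keras.layers.Dense(55),
    ])
    model.compile(optimizer="adam", loss="mse")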
About scaling and pre-processing: yes, you should normalize your data to the range 0 to 1 (or -1 to 1; it's up to you and depends on the activation function). A very good place to start is the machine learning course by Andrew Ng on Coursera; it should get you kickstarted quickly and correctly (you'll be taught the tools to check that your neural network is working properly, which is immensely important and useful).
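A short sketch of that min-max normalization, assuming the data sits in a NumPy array:

    import numpy as np

    def min_max_scale(X):
        """Rescale each column to [0, 1]; assumes no column is constant."""
        X = np.asarray(X, dtype=float)
        mins, maxs = X.min(axis=0), X.max(axis=0)
        return (X - mins) / (maxs - mins)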
Note: you should check your output variables. From the sample you gave, they seem to take discrete values; if so, you can use discrete output classes, which will be a lot more precise and predictive than using real, floating-point values (e.g., instead of having [0, 1.69, 13.56] as the possible output values, you'd have the classes [0, 1, 2]; this is called "binning" or multi-class categorization). In practice, this means changing the way your network works: use a classification network (e.g., a sigmoid or softmax output layer with a cross-entropy loss) instead of a regression network (e.g., a linear output layer with a squared-error loss).
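And a sketch of the binning idea, mapping the discrete output values to class indices:

    import numpy as np

    # Map the observed discrete output values to class indices ("binning")
    values = np.array([0.0, 1.69, 0.0, 13.56, 1.69])
    classes = np.unique(values)              # [ 0.    1.69  13.56]
    labels = np.searchsorted(classes, values)
    print(labels)                            # [0 1 0 2 1]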
I need to implement a robot brain, and I used a feedforward neural network as the controller. The robot has 24 sonar sensors and only one output, which is R=Right, L=Left, F=Forward, or B=Back. I also have a large dataset containing sonar data and the desired outputs. The FNN is trained using the backpropagation algorithm.
I used Neuroph Studio to construct the FNN and to do the training. Here are the network parameters:
Input layer: 24
Hidden layer: 10
Output layer: 1
Learning rate: 0.5
Momentum: 0.7
Global error: 0.1
My problem is that during training the error drops slightly and then seems to stay static. I tried changing the parameters, but I'm not getting any useful results!
Thanks for your help
Use 1-of-n encoding for the output: use 4 output neurons, and set up your target (output) data like this:
1 0 0 0 = right
0 1 0 0 = left
0 0 1 0 = forward
0 0 0 1 = back
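A small sketch of generating those target rows from the action labels (the helper name is mine):

    import numpy as np

    ACTIONS = ["R", "L", "F", "B"]  # right, left, forward, back

    def encode_action(action):
        """Return the 1-of-n target vector for one of the four actions."""
        target = np.zeros(len(ACTIONS))
        target[ACTIONS.index(action)] = 1.0
        return target

    print(encode_action("F"))  # [0. 0. 1. 0.]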
Reduce the number of input sensors (and corresponding input neurons) to begin with, down to 3 or 5. This will simplify things so you can understand what's going on. Later you can build back up to 24 inputs.
Neural networks often get stuck in local minima during training; that could be why your error is static. Increasing the momentum can help avoid this.
Your learning rate looks quite high. Try 0.1, but play around with these values. Every problem is different and there are no values guaranteed to work.