I'm trying to build an ANN that can tell me whether there is causality between my input and output data. The data is as follows:
My inputs are measured values of pesticides (19 total) in an area, e.g.:
-1.031413662 -0.156086316 -1.079232918 -0.659174849 -0.734577317 -0.944137546 -0.596917991 -0.282641072 -0.023508282 3.405638835 -1.008434997 -0.102330305 -0.65961995 -0.687140701 -0.167400684 -0.4387984 -0.855708613 -0.775964435 1.283238514
And the output is the measured value of plant-something in the same area (55 total), e.g.:
0.00 0.00 0.00 13.56 0 13.56 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 13.56 0 0 0 1.69 0 0 0 0 0 0 0 0 0 0 1.69 0 0 0 0 13.56 0 0 0 0 13.56 0 0 0 0 0 0
Values for input are in range from -2.5 to 10, and for output from 0 to 100.
So the question I'm trying to answer is: to what degree does pesticide A affect plant-something?
What are good ways to model (represent) input/output neurons to be able to process the mentioned input/output data? And how to scale/convert input/output data to be useful for NN?
Is there a book/paper that I should look at?
First, a neural network cannot find the causality between output and input, only the correlation (like every other probabilistic method). Causality can only be derived logically from reasoning (and even then, it's not always clear; it all depends on your axioms).
Secondly, about how to design a neural network to model your data, here is a pretty simple rule that can be generally applied to make a first working draft:
set the number of input neurons = the number of input variables for one sample
set the number of output neurons = the number of output variables for one sample
then play with the number of hidden layers and the number of hidden neurons per hidden layer. In practice, you want to use the fewest hidden layers/neurons that still model your data correctly, but enough that the function approximated by your neural network fits the data well (otherwise the output error will be huge compared to the real output dataset).
Why do you need just enough neurons but not too many? Because if you use a lot of hidden neurons, you are sure to overfit your data: you will make a perfect prediction on your training dataset, but generalize poorly to new data. Theoretically, this is because a neural network is a function approximator, so it can approximate any function, but using a function of too high an order leads to overfitting. See PAC learning for more on this.
So, in your case, the first thing to do is to clarify how many variables you have in input and in output for each sample. If it's 19 in input, create 19 input nodes; if you have 55 output variables, create 55 output neurons.
About scaling and pre-processing: yes, you should normalize your data to the range 0 to 1 (or -1 to 1; it's up to you and depends on the activation function). A very good place to start is Andrew Ng's machine learning course on Coursera; it should get you kickstarted quickly and correctly (you'll be taught the tools to check that your neural network is working correctly, which is immensely important and useful).
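As a quick sketch of that normalization (plain Python for illustration; the sample values are a few of the pesticide readings from the question), min-max scaling to a target range is just a linear rescale:

```python
def minmax_scale(values, lo=0.0, hi=1.0):
    """Linearly rescale a list of numbers into [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [lo + (hi - lo) * (v - vmin) / span for v in values]

# a few of the 19 pesticide inputs from the question
pesticides = [-1.031413662, -0.156086316, 3.405638835, -0.659174849]
scaled = minmax_scale(pesticides)            # now in [0, 1]
centered = minmax_scale(pesticides, -1, 1)   # or [-1, 1] for tanh-style units
```

Note that in practice you compute the min/max on the training set only and reuse those same constants to scale the test data.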
Note: check your output variables. From the sample you gave, they seem to take discrete values. If so, you can use discrete output classes, which will be a lot more precise and predictive than using real, floating-point values (e.g., instead of having [0, 1.69, 13.56] as the possible output values, you'll have [0, 1, 2]; this is called "binning" or multi-class categorization). In practice, this means changing the way your network works: use a classification neural network (with a sigmoid or softmax output layer) instead of a regression neural network (with a linear or rectified-linear output).
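A minimal sketch of that binning step (Python for illustration; the sample values come from the output data above):

```python
def bin_outputs(values):
    """Map each distinct output value to an integer class index."""
    classes = sorted(set(values))
    index = {v: i for i, v in enumerate(classes)}
    return [index[v] for v in values], classes

labels, classes = bin_outputs([0.0, 1.69, 13.56, 0.0, 13.56])
# classes -> [0.0, 1.69, 13.56]; labels -> [0, 1, 2, 0, 2]
```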
Here is an example of a subset of the matrix I would like to use:
1 3 5
2 3 6
1 1 1
3 5 4
5 5 5
8 8 0
This matrix is in fact 3000 x 3.
For the first 3 rows, I wish to subtract each of these rows with the first row of these three.
For the second 3 rows, I wish to subtract each of these rows with the first of these three, and so on.
As such, the output matrix will look like:
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
What code in MATLAB will do this for me?
You could also do this completely vectorized by using mat2cell, cellfun, then cell2mat. Assuming our matrix is stored in A, try:
numBlocks = size(A,1) / 3;
B = mat2cell(A, 3*ones(1,numBlocks), 3);
C = cellfun(@(x) x - x([1 1 1], :), B, 'UniformOutput', false);
D = cell2mat(C); %//Output
The first line figures out how many 3 x 3 blocks we need, assuming the number of rows is a multiple of 3. The second line uses mat2cell to decompose the matrix into 3 x 3 blocks and places them into individual cells. The third line uses cellfun so that, for each cell in our cell array (a 3 x 3 matrix), the first row is subtracted from each row of that block. This is very much like what @David did, except I didn't use repmat, to minimize overhead. The fourth line stacks these matrices back so that we get our final matrix in the end.
Example (this is using the matrix that was defined in your post):
A = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];
numBlocks = size(A,1) / 3;
B = mat2cell(A, 3*ones(1, numBlocks), 3);
C = cellfun(@(x) x - x([1 1 1], :), B, 'UniformOutput', false);
D = cell2mat(C);
Output:
D =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
In hindsight, I think @David is right with respect to performance gains. Unless this code is repeated many times, I think the for loop will be more efficient. Either way, I wanted to provide another alternative. Cool exercise!
Edit: Timing and Size Tests
Because of our discussion earlier, I have decided to do timing and size tests. These tests were performed on an Intel i7-4770 @ 3.40 GHz CPU with 16 GB of RAM, using MATLAB R2014a on Windows 7 Ultimate. Basically, I did the following:
Test #1 - Set the random seed generator to 1 for reproducibility. I wrote a loop that cycled 10000 times. For each iteration in the loop, I generated a random integer 3000 x 3 matrix, then performed each of the methods that were described here. I took note of how long it took for each method to complete after 10000 cycles. The timing results are:
David's method: 0.092129 seconds
rayryeng's method: 1.9828 seconds
natan's method: 0.20097 seconds
natan's bsxfun method: 0.10972 seconds
Divakar's bsxfun method: 0.0689 seconds
As such, Divakar's method is the fastest, followed by David's for loop method, followed closely by natan's bsxfun method, then natan's original kron method, and finally the sloth (a.k.a. mine).
Test #2 - I decided to see how the methods scale as the matrix grows. The setup was as follows: I ran 1000 iterations, increasing the number of matrix rows by 3000 each time. As such, iteration 1 used a 3000 x 3 matrix, the next a 6000 x 3 matrix, and so on. The random seed was set to 1 again. At each iteration, I recorded the time taken to complete each method's code; to ensure fairness, the variables were cleared before the processing code began. Here is a stem plot showing the timing for each matrix size. I subsetted the plot so that it displays timings from 200000 x 3 to 300000 x 3. Take note that the horizontal axis records the number of rows at each iteration: the first stem is for 3000 rows, the next for 6000 rows, and so on. The columns remain fixed at 3 (of course).
I can't explain the random spikes throughout the graph... probably something happening in RAM. However, I'm very sure I cleared the variables at each iteration to ensure no bias. In any case, Divakar and David are closely tied; next comes natan's bsxfun method, then natan's kron method, followed last by mine. It's interesting to see Divakar's bsxfun method and David's for loop side by side in timing.
Test #3 - I repeated what I did for Test #2, but following natan's suggestion, I went to a logarithmic scale. I did 6 iterations, starting at a 3000 x 3 matrix and increasing the rows tenfold each time. As such, the second iteration had 30000 x 3, the third 300000 x 3, and so on, up until the last iteration at 3e8 x 3.
I have plotted on a semi-logarithmic scale on the horizontal axis, while the vertical axis is still a linear scale. Again, the horizontal axis describes the number of rows in the matrix.
I changed the vertical limits so we can see most of the methods. My method is so poor performing that it would squash the other timings towards the lower end of the graph. As such, I changed the viewing limits to take my method out of the picture. Essentially what was seen in Test #2 is verified here.
Here's another way to implement this with bsxfun, slightly different from natan's bsxfun implementation -
t1 = reshape(a,3,[]); %// a is the input matrix
out = reshape(bsxfun(@minus,t1,t1(1,:)),[],3); %// Desired output
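For readers coming from Python, the same block-wise subtraction can be sketched with NumPy (a 3-D reshape plus broadcasting, analogous to the reshape/bsxfun pair above):

```python
import numpy as np

a = np.array([[1, 3, 5], [2, 3, 6], [1, 1, 1],
              [3, 5, 4], [5, 5, 5], [8, 8, 0]])

# View the (3n) x 3 matrix as n blocks of shape 3 x 3,
# subtract each block's first row via broadcasting, and flatten back.
blocks = a.reshape(-1, 3, 3)
out = (blocks - blocks[:, :1, :]).reshape(-1, 3)
# -> [[0,0,0],[1,0,1],[0,-2,-4],[0,0,0],[2,0,1],[5,3,-4]]
```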
A slightly shorter, vectorized way is (if a is your matrix):
b=a-kron(a(1:3:end,:),ones(3,1));
let's test:
a=[1 3 5
2 3 6
1 1 1
3 5 4
5 5 5
8 8 0]
a-kron(a(1:3:end,:),ones(3,1))
ans =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
Edit
Here's a bsxfun solution (less elegant, but hopefully faster):
a-reshape(bsxfun(@times,ones(1,3),permute(a(1:3:end,:),[2 3 1])),3,[])'
ans =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
Edit 2
OK, this got me curious, as I know bsxfun starts to be less efficient for bigger array sizes. So I tried to benchmark my two solutions using timeit (since they are one-liners, it's easy). Here it is:
range=3*round(logspace(1,6,200));
t1=zeros(1,numel(range)); t2=zeros(1,numel(range)); %// preallocate timings
for n=1:numel(range)
a=rand(range(n),3);
f=@() a-kron(a(1:3:end,:),ones(3,1));
g=@() a-reshape(bsxfun(@times,ones(1,3),permute(a(1:3:end,:),[2 3 1])),3,[])';
t1(n)=timeit(f);
t2(n)=timeit(g);
end
semilogx(range,t1./t2);
I didn't test the for loop or Divakar's bsxfun, but you can see that for arrays smaller than about 3e4 rows kron is better than bsxfun, and this flips for larger arrays (a ratio < 1 means kron took less time for that array size). This was done in MATLAB R2012a on Windows 7 (an i5 machine).
A simple for loop. This does each 3 x 3 block separately.
A=randi(5,9,3)
B=A(1:3:end,:)
D=zeros(size(A)); %// preallocate the result
for i=1:size(A,1)/3
D(3*i-2:3*i,:)=A(3*i-2:3*i,:)-repmat(B(i,:),3,1);
end
D
Whilst it may be possible to vectorise this, I don't think the performance gains would be worth it unless you do this many times. For a 3000 x 3 matrix it doesn't take long at all.
Edit: In fact this seems to be pretty fast. I think that's because MATLAB's JIT compilation can speed up simple for loops well.
You can do it using just indexing:
a(:) = a(:) - a(3*floor((0:numel(a)-1)/3)+1).';
Of course, the 3 above can be replaced by any other block size. Note that the grouping is done on column-major linear indices, so if the block size doesn't divide the number of rows, the last group of each column wraps into the next column.
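The row-grouping form of this indexing trick, sketched in NumPy for comparison (my analogue, using explicit row indices rather than linear ones):

```python
import numpy as np

a = np.array([[1, 3, 5], [2, 3, 6], [1, 1, 1],
              [3, 5, 4], [5, 5, 5], [8, 8, 0]])

# Row r gets row 3*(r // 3) -- the first row of its block -- subtracted.
first_rows = a[3 * (np.arange(a.shape[0]) // 3)]
b = a - first_rows
```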
I'm having a hard time setting up a neural network to classify Tic-Tac-Toe board states (final or intermediate) as "X wins", "O wins" or "Tie".
I will describe my current solution and results. Any advice is appreciated.
* DATA SET *
Dataset = 958 possible end-games + 958 random-games = 1916 board states
(random games might be incomplete but are all legal, i.e. they do not have both players winning simultaneously).
Training set = 1600 random sample of Dataset
Test set = remaining 316 cases
In my current pseudo-random development scenario the dataset has the following characteristics.
Training set:
- 527 wins for "X"
- 264 wins for "O"
- 809 ties
Test set:
- 104 wins for "X"
- 56 wins for "O"
- 156 ties
* Modeling *
Input Layer: 18 input neurons where each one corresponds to a board position and player. Therefore,
the board (B=blank):
x x o
o x B
B o x
is encoded as:
1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 0
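That encoding can be sketched as follows (Python for illustration; the board is passed as a 9-character string, read row by row):

```python
def encode_board(board):
    """Encode a 9-character board ('x', 'o', 'B' for blank) as 18 binary
    inputs: one (is_x, is_o) pair per cell; blanks encode as (0, 0)."""
    bits = []
    for cell in board:
        bits += [1 if cell == 'x' else 0, 1 if cell == 'o' else 0]
    return bits

encoded = encode_board('xxooxBBox')  # the board above, row by row
```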
Output Layer: 3 output neurons which correspond to each outcome (X wins, O wins, Tie).
* Architecture *
Based on: http://www.cs.toronto.edu/~hinton/csc321/matlab/assignment2.tar.gz
1 Single Hidden Layer
Hidden Layer activation function: Logistic
Output Layer activation function: Softmax
Error function: Cross-Entropy
* Results *
No combination of parameters seems to achieve 100% correct classification rate. Some examples:
NHidden  LRate   InitW  MaxEpoch  Epochs  FMom  Errors  TestErrors
8        0.0025  0.01   10000     4500    0.8   0       7
16       0.0025  0.01   10000     2800    0.8   0       5
16       0.0025  0.1    5000      1000    0.8   0       4
16       0.0025  0.5    5000      5000    0.8   3       5
16       0.0025  0.25   5000      1000    0.8   0       5
16       0.005   0.25   5000      1000    0.9   10      5
16       0.005   0.25   5000      5000    0.8   15      5
16       0.0025  0.25   5000      1000    0.8   0       5
32       0.0025  0.25   5000      1500    0.8   0       5
32       0.0025  0.5    5000      600     0.9   0       5
8        0.0025  0.25   5000      3500    0.8   0       5
Important - Please say so if you think I could improve any of the following:
- The dataset characteristics (source and quantities of training and test cases) aren't the best.
- An alternative problem modeling (encoding of input/output neurons) is more suitable.
- Better network architecture (Number of Hidden Layers, activation/error functions, etc.).
Assuming that my current options in this regard, even if not optimal, should not prevent the system from having a 100% correct classification rate, I would like to focus on other possible issues.
In other words, considering the simplicity of the game, this dataset/modeling/architecture should do it; therefore, what am I doing wrong regarding the parameters?
I do not have much experience with ANN and my main question is the following:
Using 16 Hidden Neurons, the ANN could learn to associate each Hidden Unit with "a certain player winning in a certain way"
(3 different rows + 3 different columns + 2 diagonals) * 2 players = 16 ways
In this setting, an "optimal" set of weights is pretty straightforward: Each hidden unit has "greater" connection weights from 3 of the input units (corresponding to a row, columns or diagonal of a player) and a "greater" connection weight to one of the output units (corresponding to "a win" of that player).
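To make that intuition concrete, here is a sketch of such hand-picked weights (my construction, using hard thresholds instead of trained logistic/softmax units): one hidden unit per (line, player) pair, with weight 1 from that line's three input neurons and a threshold of 2.5, and an output unit that ORs over its player's hidden units:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def classify(board):
    """One hidden threshold unit per (line, player): it fires only
    when all 3 cells of its line belong to its player."""
    for player, label in (('x', 'X wins'), ('o', 'O wins')):
        hidden = [sum(board[i] == player for i in line) > 2.5
                  for line in LINES]
        if any(hidden):          # output unit: OR over this player's units
            return label
    return 'Tie'

result = classify('xxooxBBox')   # the example board: x wins on the diagonal
```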
No matter what I do, I cannot decrease the number of test errors, as the above table shows.
Any advice is appreciated.
You are doing everything right, but you're simply trying to tackle a difficult problem here, namely to generalize from some examples of tic-tac-toe configurations to all others.
Unfortunately, the simple neural network you use neither perceives the spatial structure of the input (neighborhood) nor can it exploit the game's symmetries. So in order to get zero test error, you can either:
increase the size of the dataset to include most (or all) possible configurations -- which the network will then be able to simply memorize, as indicated by the zero training error in most of your setups;
choose a different problem, where there is more structure to generalize from;
use a network architecture that can capture symmetry (e.g. through weight-sharing) and/or spatial relations of the inputs (e.g. different features). Convolutional networks are just one example of this.
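A cheap way to hand the network those symmetries without changing the architecture is data augmentation: every board yields up to 8 equivalent boards under rotation and reflection. A sketch (my suggestion; boards as 9-character strings, read row by row):

```python
def rotate(board):
    """Rotate a 9-char board string 90 degrees clockwise."""
    return ''.join(board[i] for i in (6, 3, 0, 7, 4, 1, 8, 5, 2))

def reflect(board):
    """Mirror a 9-char board string left-to-right."""
    return ''.join(board[i] for i in (2, 1, 0, 5, 4, 3, 8, 7, 6))

def symmetries(board):
    """All rotations/reflections of a board (dihedral group of the square)."""
    out, b = set(), board
    for _ in range(4):
        out |= {b, reflect(b)}
        b = rotate(b)
    return out

augmented = symmetries('xxooxBBox')  # 8 distinct boards for this position
```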
Devise a perceptron in MATLAB that judges the parity of 1, 2, 3, 4, .... I have trained a neural network, but it has very large variance.
How should I represent the samples?
If I directly use 1, 2, 3, 4, 5, ... as samples, the variance is very large. In other words, the neural network fails to classify them.
Can some other function be used to transform the samples?
This is the program:
P= [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]; % question: can I use another form to express the samples?
T= [1 0 1 0 1 0 1 0 1 0 1 0 1 0 1];
net=newp([0 14],1); % input range should cover the data
net.trainParam.epochs = 40;
net=train(net,P,T);
Y=sim(net,P)
E1=mae(Y-T)
plotpv(P,Y);
plotpc(net.iw{1},net.b{1})
I'm not sure I fully understand your question... but I thought it worth mentioning that the (single-layer) perceptron is (famously) unable to compute exclusive-or (XOR). The binary XOR function is equivalent to calculating parity for two bits. For this reason, while I'm not familiar with that specific MATLAB function, I expect you to need a multi-layer perceptron: a single hidden layer with enough units can represent n-bit parity, but a single-layer model like newp cannot. If you use a perceptron model with too few layers, I'd expect you to fail to train it to calculate parity bits.
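To illustrate the XOR point with hand-wired weights (a sketch, not something newp produces): two hidden threshold units are enough:

```python
def step(z):
    """Hard-threshold activation."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer threshold network: an OR unit, an AND unit, and an
    output that fires for 'OR and not AND', i.e. XOR."""
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    return step(h_or - h_and - 0.5)

truth_table = [xor_net(a, b) for a in (0, 1) for b in (0, 1)]
# -> [0, 1, 1, 0]
```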
Calculating parity is not a task to which the perceptron is ideally suited. :)
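That said, the representation matters a lot: if you feed the network the binary digits of each number instead of the raw value (one input neuron per bit), even/odd parity depends only on the least significant bit, and even a single perceptron with hand-set weights separates it. A sketch (my suggestion, with hypothetical weights, not newp output):

```python
def to_bits(n, width=4):
    """Binary digits of n, most significant bit first."""
    return [(n >> k) & 1 for k in range(width - 1, -1, -1)]

def perceptron(bits):
    """Single threshold unit looking only at the least significant bit."""
    weights, bias = [0, 0, 0, -1], 0.5
    z = sum(w * x for w, x in zip(weights, bits)) + bias
    return 1 if z > 0 else 0

outputs = [perceptron(to_bits(n)) for n in range(15)]
# matches T from the question: 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
```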