Artificial neural networks benchmark

Are there any benchmarks that can be used to check whether an implementation of an ANN is correct?
I want to have some input and output data, together with information like:
- The output of a feedforward neural network with 3 layers should be correct on 90% of the test data.
I need this information to be sure that this kind of ANN is able to deal with such a problem.

Probably the best thing you can do is design a neural network that learns the XOR function. Here is a website that shows sample runs: http://www.generation5.org/content/2001/xornet.asp
I had a homework assignment in which our teacher gave us the first few runs of a neural network with given weights; if you set up your neural network with the same weights, you should get the same results (with plain backpropagation).
If you have a neural network with 1 input layer (2 input neurons + 1 constant), 1 hidden layer (2 neurons + 1 constant) and 1 output layer, you initialize all your weights to 0.6, and make your constant neurons always return -1, then you should get exactly the same results in your first 10 epochs:
Data File: xor.csv
Number of examples: 4
Number of input units: 2
Number of hidden units: 2
Maximum Epochs: 10
Learning Rate: 0.100000
Error Margin: 0.100000
==== Initial Weights ====
Input (3) --> Hidden (3) :
1 2
0 0.600000 0.600000
1 0.600000 0.600000
2 0.600000 0.600000
Hidden (3) --> Output:
0 0.600000
1 0.600000
2 0.600000
***** Epoch 1 *****
Maximum RMSE: 0.5435466682137927
Average RMSE: 0.4999991292217466
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.599691 0.599691
1 0.599987 0.599987
2 0.599985 0.599985
Hidden (3) --> Output:
0 0.599864
1 0.599712
2 0.599712
***** Epoch 2 *****
Maximum RMSE: 0.5435080531724404
Average RMSE: 0.4999982558452263
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.599382 0.599382
1 0.599973 0.599973
2 0.599970 0.599970
Hidden (3) --> Output:
0 0.599726
1 0.599425
2 0.599425
***** Epoch 3 *****
Maximum RMSE: 0.5434701135827593
Average RMSE: 0.4999973799942081
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.599072 0.599072
1 0.599960 0.599960
2 0.599956 0.599956
Hidden (3) --> Output:
0 0.599587
1 0.599139
2 0.599139
***** Epoch 4 *****
Maximum RMSE: 0.5434328258833577
Average RMSE: 0.49999650178769495
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.598763 0.598763
1 0.599948 0.599948
2 0.599941 0.599941
Hidden (3) --> Output:
0 0.599446
1 0.598854
2 0.598854
***** Epoch 5 *****
Maximum RMSE: 0.5433961673713259
Average RMSE: 0.49999562134010495
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.598454 0.598454
1 0.599936 0.599936
2 0.599927 0.599927
Hidden (3) --> Output:
0 0.599304
1 0.598570
2 0.598570
***** Epoch 6 *****
Maximum RMSE: 0.5433601161709642
Average RMSE: 0.49999473876144657
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.598144 0.598144
1 0.599924 0.599924
2 0.599914 0.599914
Hidden (3) --> Output:
0 0.599161
1 0.598287
2 0.598287
***** Epoch 7 *****
Maximum RMSE: 0.5433246512036478
Average RMSE: 0.49999385415748615
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.597835 0.597835
1 0.599912 0.599912
2 0.599900 0.599900
Hidden (3) --> Output:
0 0.599017
1 0.598005
2 0.598005
***** Epoch 8 *****
Maximum RMSE: 0.5432897521587884
Average RMSE: 0.49999296762990975
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.597526 0.597526
1 0.599901 0.599901
2 0.599887 0.599887
Hidden (3) --> Output:
0 0.598872
1 0.597723
2 0.597723
***** Epoch 9 *****
Maximum RMSE: 0.5432553994658493
Average RMSE: 0.49999207927647754
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.597216 0.597216
1 0.599889 0.599889
2 0.599874 0.599874
Hidden (3) --> Output:
0 0.598726
1 0.597443
2 0.597443
***** Epoch 10 *****
Maximum RMSE: 0.5432215742673802
Average RMSE: 0.4999911891911738
Percent Correct: 0%
Input (3) --> Hidden (3) :
1 2
0 0.596907 0.596907
1 0.599879 0.599879
2 0.599862 0.599862
Hidden (3) --> Output:
0 0.598579
1 0.597163
2 0.597163
==== Final Weights ====
Input (3) --> Hidden (3) :
1 2
0 0.596907 0.596907
1 0.599879 0.599879
2 0.599862 0.599862
Hidden (3) --> Output:
0 0.598579
1 0.597163
2 0.597163
xor.csv contains the following data:
0.000000,0.000000,0
0.000000,1.000000,1
1.000000,0.000000,1
1.000000,1.000000,0
Your neural network should look like this (disregard the weights; yellow is the constant input neuron):
[figure omitted: a 2-2-1 network with constant (bias) neurons feeding the hidden and output layers; source: jtang.org]
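If you want to reproduce a run like this in code, here is a minimal sketch in Python/NumPy. It assumes sigmoid activations, squared-error loss and per-example (online) updates, none of which the trace states explicitly, so your numbers may differ slightly; the point is to have a fixed-weight, fixed-data run to compare your own implementation against.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR data: two inputs, target taken from the last column of xor.csv
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

# All weights initialised to 0.6, constant (bias) neurons fixed at -1
w_ih = np.full((3, 2), 0.6)   # (2 inputs + bias) -> 2 hidden units
w_ho = np.full(3, 0.6)        # (2 hidden + bias) -> 1 output unit
lr = 0.1

for epoch in range(10):
    sq_errors = []
    for x, target in zip(X, t):
        xb = np.append(x, -1.0)        # constant input neuron returns -1
        h = sigmoid(xb @ w_ih)         # hidden activations
        hb = np.append(h, -1.0)        # constant hidden neuron returns -1
        y = sigmoid(hb @ w_ho)         # network output
        sq_errors.append((target - y) ** 2)

        # Plain backpropagation for sigmoid units with squared error
        delta_o = (y - target) * y * (1.0 - y)
        delta_h = delta_o * w_ho[:2] * h * (1.0 - h)
        w_ho -= lr * delta_o * hb
        w_ih -= lr * np.outer(xb, delta_h)

    print(f"Epoch {epoch + 1}: RMSE {np.sqrt(np.mean(sq_errors)):.6f}")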

You can use the MNIST database of handwritten digits, with a 60k training set and a 10k test set, to compare the error rate of your implementation against various other machine learning algorithms such as k-NN, SVMs, convolutional networks (deep learning) and, of course, different ANN configurations.
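As a concrete starting point, here is a rough sketch (assuming scikit-learn is installed; any loader that gives you the standard 60k/10k split will do) that measures a simple k-NN baseline. Plain k-NN on raw pixels is usually reported at a few percent error, so a reasonable ANN implementation should land in that neighbourhood or better.

from sklearn.datasets import fetch_openml
from sklearn.neighbors import KNeighborsClassifier

mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X, y = mnist.data / 255.0, mnist.target          # scale pixels to [0, 1]

# Conventional split: first 60,000 images for training, last 10,000 for testing
X_train, y_train = X[:60000], y[:60000]
X_test, y_test = X[60000:], y[60000:]

# k-NN baseline on raw pixels (slow on the full set, but needs no tuning)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"k-NN test error: {1.0 - knn.score(X_test, y_test):.2%}")

# Evaluate your own ANN on the same X_test / y_test and compare error rates.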

Related

Using a neural network to fit a reduced Boolean function, but the learned parameters are not as expected

This is the Boolean function I am trying to fit:
[boolean function description][1]
Theoretically, a neural network with 1 hidden layer of at least 3 neurons should be enough, and that is how I built the network in PyTorch.
However, although the network's predictions are usually correct, the parameters (I mean the weights and biases) are not as expected.
I expected the parameters to look like this (a perceptron operation is equivalent to a Boolean gate):
[perceptron equivalent to a boolean gate][2]
Here is the key code:
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(4, 3),   # 4 Boolean inputs -> 3 hidden units
            nn.ReLU(),
            nn.Linear(3, 1),   # 3 hidden units -> 1 output
        )

    def forward(self, x):
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
base_lr = 0.001
optimizer = torch.optim.Adam(model.parameters(), base_lr)
criterion = nn.MSELoss().to(device)
Here is the key output:
First, the predictions are not bad: the categories are correct, but some of the numbers are not precise enough.
w x y z pred
0 0 0 0 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
1 0 0 0 1 [tensor(0.2459, grad_fn=<UnbindBackward0>)]
2 0 0 1 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
3 0 0 1 1 [tensor(0.0040, grad_fn=<UnbindBackward0>)]
4 0 1 0 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
5 0 1 0 1 [tensor(0.7707, grad_fn=<UnbindBackward0>)]
6 0 1 1 0 [tensor(-0.0015, grad_fn=<UnbindBackward0>)]
7 0 1 1 1 [tensor(-0.0025, grad_fn=<UnbindBackward0>)]
8 1 0 0 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
9 1 0 0 1 [tensor(-0.2525, grad_fn=<UnbindBackward0>)]
10 1 0 1 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
11 1 0 1 1 [tensor(-0.0077, grad_fn=<UnbindBackward0>)]
12 1 1 0 0 [tensor(0.9992, grad_fn=<UnbindBackward0>)]
13 1 1 0 1 [tensor(0.2722, grad_fn=<UnbindBackward0>)]
14 1 1 1 0 [tensor(-0.0066, grad_fn=<UnbindBackward0>)]
15 1 1 1 1 [tensor(0.0033, grad_fn=<UnbindBackward0>)]
Second, the parameters are not as expected.
linear_relu_stack.0.weight tensor([[-0.3637, 0.3838, 0.7624, 0.3661],
[ 0.2857, 0.5719, 0.5721, -0.5846],
[ 0.4782, -0.5035, -0.2349, 1.2070]])
linear_relu_stack.0.bias tensor([-0.7657, -0.8599, -0.4842])
linear_relu_stack.2.weight tensor([[-1.3418, -1.7255, -1.0422]])
linear_relu_stack.2.bias tensor([0.9992])
My question is: why doesn't the NN converge to the parameters I expected?
What is the problem?
[1]: https://i.stack.imgur.com/WqaXi.png
[2]: https://i.stack.imgur.com/Z9cQb.png
Generally, you can't expect a neural network to solve a problem the same way you would.
Firstly, the network starts from a random state and proceeds by optimizing that state (e.g. you have used Adam in your code), which means there is no guarantee of which state it will eventually land in.
Secondly, if your results are, as you state in your question, more or less correct, then your network has found a reduction of your function; it's just that this reduction might not make sense to you, especially if you are trying to understand it in terms of logical functions, which is not very close to how neural networks work.
Not being interpretable is a well-known downside of neural networks, and this case is no exception.
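One way to see this is to check the function the network computes rather than its weights: enumerate all 16 inputs and compare the rounded outputs against your truth table. A small sketch, assuming the NeuralNetwork class from the question is defined and that you substitute your trained model:

import itertools
import torch

model = NeuralNetwork()   # substitute your trained model here

inputs = torch.tensor(list(itertools.product([0.0, 1.0], repeat=4)))
with torch.no_grad():
    preds = model(inputs).squeeze(1)

for x, p in zip(inputs, preds):
    # round to 0/1 and compare against the expected truth-table value
    print([int(v) for v in x.tolist()], "->", int(p.round().item()), f"({p.item():+.4f})")

If the 16 rounded outputs match your truth table, the network has implemented the function; it has simply found a different internal decomposition than the hand-built perceptron gates.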

Which column in MATLAB selforgmap output corresponds to which neuron of the SOM map

I used selforgmap for pattern recognition. After training finished, I calculated the network's output for the whole data set and got a logical matrix.
I want to know how selforgmap:
1- numbers the neurons (I mean from 1 to N, where N is the total number of neurons)
2- maps the columns of the output matrix to those neurons
Here is my map (a 2-by-2 SOM; plot axis ticks omitted):
O------O
 /    /
O------O
The output looks like this (after transposing):
1 0 0 0
0 1 0 0
1 0 0 0
1 0 0 0
0 0 1 0
0 1 0 0
0 0 1 0
0 0 1 0
I want to know which column in the output corresponds to which neuron of the map.
Selforgmap in MATLAB starts the numbering from the bottom left. For your example, the neurons are labeled:
3 - 4
/ /
1 2
You can use the
vec2ind(output)
command to get, for each input, the index of the neuron to which it has been assigned.

Creating a perceptron network when input array is large

I need to create a perceptron which has two target values (0, 1) and 21 input vectors. Each vector is of size 110592. How do I call the "newp" function for this?
p=[I1,I2,I3,P1,P2,P3,Q1,Q2,Q3,R1,R2,R3,Z1,Z2,Z3,A1,A2,A3,B1,B2,B3];
t=[0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
Each vector I1, I2, etc. is of size 1*110592.
If the perceptron had only two inputs, I could call the function as below:
newp([0 1;0 2],1); % the 1st input ranges from 0 to 1 and the 2nd from 0 to 2
So the problem is: how do I format the first argument when I have such a large vector input?

Matlab: command for counting occurrences in ascending order not cumulatively?

This must have been asked before, but I cannot find it now. I want to count the number of zeros and append that count to a vector, then count the number of ones and append that count, and so on; if a value does not occur, its count should be zero.
Is there a command in MATLAB to do this counting?
Input ---> Output
0 1 1 1 2 3 3 4 7 ---> [1,3,1,2,1,0,0,1]
0 1 1 1 ---> 1 3
2 7 ----> 0 0 1 0 0 0 0 1
To get the total count of occurrences of each number, use histc:
x = [0 1 1 1 2 3 3 4 7]; % example data
histc(x, 0:max(x))       % counts of 0, 1, ..., max(x)

calculate co-occurrences

I have a file as shown in the screenshot attached. There are 61 events (peaks) and I want to find how often each peak occurs with each other peak (co-occurrence), for all possible combinations. The file has the frequency (the number of times the peak appears in the 47 samples) and the probability (the number of times the peak occurs divided by the total number of samples).
Then I want to find mutually exclusive peaks using the ratio p(x,y) / (p(x)*p(y)), where p(x,y) is the probability that x and y co-occur, p(x) is the probability of peak x and p(y) is the probability of peak y.
What is the best way to solve such a problem? Do I need to write a Perl script, or are there some R functions I could use? I am a biologist trying to learn Perl and R, so I would appreciate some example code.
In the following, I've assumed that p(x,y) should be the probability (rather than the number of times) that x and y co-occur. If that's not correct, just remove the division by nrow(X) in the computation of num below.
# As an example, create a sub-matrix of your data
X <- cbind(c(0,0,0,0,0,0), c(1,0,0,1,1,1), c(1,1,0,0,0,0))
num <- (t(X) %*% X)/nrow(X) # The numerator of your expression
means <- colMeans(X) # A vector of means of each column
denom <- outer(means, means) # The denominator
out <- num/denom
# [,1] [,2] [,3]
# [1,] NaN NaN NaN
# [2,] NaN 1.50 0.75
# [3,] NaN 0.75 3.00
Note: The NaNs in the results are R's way of indicating that those cells are "Not a number" (since they are each the result of dividing 0 by 0).
Your question is not completely clear without a proper example, but I think this result is along the lines of what you want, i.e. "I want to find how often each peak occurs with the other (co-occurrence)":
library(igraph)
library(tnet)
library(bipartite)
#if you load your data in as a matrix e.g.
mat<-matrix(c(1,1,0,2,2,2,3,3,3,4,4,0),nrow=4,byrow=TRUE) # e.g.
# [,1] [,2] [,3] # your top line as columns e.g.81_05 131_00 and peaks as rows
#[1,] 1 1 0
#[2,] 2 2 2
#[3,] 3 3 3
#[4,] 4 4 0
then
pairs<-web2edges(mat,return=TRUE)
pairs<- as.tnet(pairs,type="weighted two-mode tnet")
peaktopeak<-projecting_tm(pairs, method="sum")
peaktopeak
#peaktopeak
# i j w
#1 1 2 2 # top row here says peak1 and peak2 occurred together twice
#2 1 3 2
#3 1 4 2
#4 2 1 4
#5 2 3 6
#6 2 4 4
#7 3 1 6
#8 3 2 9
#9 3 4 6
#10 4 1 8
#11 4 2 8
#12 4 3 8 # peak4 occurred with peak3 8 times
EDIT: If mutually exclusive peaks are simply those that never share a 1 in the same column of your original data, then you can read this directly from peaktopeak. For instance, if peak 1 and peak 3 never co-occur, they won't appear together in a row of peaktopeak.
To look at this more easily, you could:
peakmat <- tnet_igraph(peaktopeak,type="weighted one-mode tnet")
peakmat<-get.adjacency(peakmat,attr="weight")
e.g.:
# [,1] [,2] [,3] [,4]
#[1,] 0 2 2 2
#[2,] 4 0 6 4
#[3,] 6 9 0 6
#[4,] 8 8 8 0 # zeros would represent peaks that never co-occur.
#In this case everything shares at least 2 co-occurrences
#diagonals are 0 as saying peak1 occurs with itself is obviously silly.