devise a perceptron to judge parity such as 1 2 3 4 - matlab

Devise a perceptron that judges the parity of 1 2 3 4 ... using MATLAB. I have trained a neural network, but it has very large variance.
I want to ask how to express the samples.
If I directly use 1 2 3 4 5 ... as samples, the variance is very large. In other words, the neural network cannot be used to classify.
I want to ask whether another function can be used to transform the samples.
This is the program:
P = [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]; % question: can I express the samples in another form?
T = [1 0 1 0 1 0 1 0 1 0 1 0 1 0 1];      % targets: 1 = even, 0 = odd
net = newp([0 14], 1);                     % input range must cover the data (the original [-1 10] did not)
net.trainParam.epochs = 40;
net = train(net, P, T);
Y = sim(net, P)
E1 = mae(Y - T)
plotpv(P, Y);
plotpc(net.iw{1}, net.b{1})

I'm not sure I fully understand your question... but it's worth mentioning that the (single-layer) perceptron is famously unable to compute exclusive-or (XOR). The binary XOR function is equivalent to computing the parity of two bits. For this reason, while I'm not familiar with that specific MATLAB package, I expect you will need a multi-layer perceptron, perhaps one with more layers than there are bits in the input string. If you use a perceptron model with too few layers, I'd expect training it to compute parity bits to fail.
Calculating parity is not a task to which the perceptron is ideally suited. :)
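To make the point about representation concrete: if each integer is encoded as its binary bits, parity is just the XOR of those bits, which a small multi-layer network can learn (a single-layer perceptron still cannot). Below is a minimal sketch, assuming the Deep Learning Toolbox's feedforwardnet is available; the 4-bit encoding and the hidden-layer size of 8 are my own choices, not something from the question:
n = 0:14;
P = double(dec2bin(n, 4).' == '1'); % 4 x 15: each column is one sample's bits
T = double(mod(n, 2) == 0);         % target: 1 = even, 0 = odd (as in the question)
net = feedforwardnet(8);            % one hidden layer of 8 neurons
net = train(net, P, T);
Y = round(sim(net, P))              % should reproduce T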

What is the difference between predict and svmclassify?

I tried the following code
data = [27 9 0
11.6723281 28.93422177 0
25 9 0
23 8 0
5.896096039 23.97745722 1
21 6 0
21.16823369 5.292058423 0
4.242640687 13.43502884 1
22 6 0];
Attributes = data(:,1:2);
Classes = data(:,3);
train = [1 3 4 5 6 7];
test = [2 8 9];
%%# Train
SVMModel = fitcsvm(Attributes(train,:), Classes(train)) % predictors first, then labels
classOrder = SVMModel.ClassNames
sv = SVMModel.SupportVectors;
figure
gscatter(Attributes(train,1), Attributes(train,2), Classes(train))
hold on
plot(sv(:,1), sv(:,2), 'ko', 'MarkerSize', 10) % circle the support vectors
legend('good','bad','Support Vector')
hold off
I tried both predict and svmclassify, but I get an error. What is the basic difference between these two functions?
[label,score] = predict(SVMModel, Attributes(test,:));
label = svmclassify(SVMModel, Attributes(test,:));
First off, there's quite a big note on top of the documentation page on svmclassify:
svmclassify will be removed in a future release. See fitcsvm, ClassificationSVM, and CompactClassificationSVM instead.
MATLAB is a bit vague in its naming of functions, as there are loads of functions named predict, using different schemes and algorithms. I suspect you'll want the one for SVMs. It should return the same result as svmclassify, but I think that either something went wrong in determining which predict MATLAB decided to use, or predict has a newer algorithm than the deprecated svmclassify, hence a different output may result.
The conclusion is that you should use the newest functions, so that your code still runs in future releases and gets the newest algorithms. MATLAB will choose the correct version of predict based on what kind of input structure you feed it.
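To make the old/new split concrete, here is a hedged sketch of the two API generations side by side, using the question's variables; svmclassify only accepts a model built by the old svmtrain, which is why calling it with a fitcsvm model errors:
% Old pair (removed in later releases):
oldModel = svmtrain(Attributes(train,:), Classes(train));
oldLabel = svmclassify(oldModel, Attributes(test,:));
% New pair (recommended):
newModel = fitcsvm(Attributes(train,:), Classes(train));
newLabel = predict(newModel, Attributes(test,:));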

why should I transpose the input in a neural network in matlab?

I would like to ask a question about MATLAB's transpose operator. For example, in this case:
input = input';
it takes the transpose of input, but I want to understand why we should use the transpose when using an Artificial Neural Network in MATLAB.
My second question:
I am trying to create a classification using an ANN in MATLAB. I display results like this:
a = sim(neuralnetworkname, test)
test represents my test data for the neural network.
The results are like this:
a =
Columns 1 through 12
2.0374 3.9589 3.2162 2.0771 2.0931 3.9947 3.1718 3.9813 2.1528 3.9995 3.8968 3.9808
Columns 13 through 20
3.9996 3.7478 2.1088 3.9932 2.0966 2.0644 2.0377 2.0653
If the result of a is about 2, it is benign; if it is about 4, it is malignant.
So I want to calculate, for example: there are 100 benign cases in 500 data points (100/500). How can I display this 100/500 on screen?
I tried to be clear, but if I wasn't clear enough, I can explain more. Thanks.
First Question
You don't need to transpose the input values every time. MATLAB's neural network tools expect input samples column by column by default (each column is one sample). So you have two choices: 1. reorder your dataset, or 2. transpose the input.
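For example, a tiny sketch with made-up sizes: if 100 samples with 5 features each are stored as rows (as is common when importing CSV data), you transpose before training:
X = rand(100, 5); % hypothetical data: 100 samples as rows
X = X';           % now 5 x 100: one sample per column, as the toolbox expects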
Second Question
Suppose you have a matrix like this:
a = [1 2 3 4 5 6 7 8 9 0 0 0];
To count how many elements are below 8, write this:
sum(a < 8) % a<8 is the logical mask [1 1 1 1 1 1 1 0 0 1 1 1]
The output will be:
10
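Applied to your actual question, here is a minimal sketch; the threshold of 3 (midway between 2 and 4) and the printing format are my own assumptions:
a = sim(neuralnetworkname, test); % network outputs near 2 (benign) or 4 (malignant)
nBenign = sum(a < 3);             % outputs closer to 2 count as benign
nTotal = numel(a);
fprintf('%d/%d benign (%.1f%%)\n', nBenign, nTotal, 100*nBenign/nTotal);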

Matlab: I/O Delay detection

I have a continuous process with 3 inputs and 1 output. The 3 inputs act consecutively in time: the output lags Input 1 by 30 minutes, Input 2 by 15, etc.
My dataset below shows a startup for the system after a shutdown:
I1 I2 I3 Out
0 0 0 0
3 0 0 0
8 4 0 0
13 8 6 0
22 13 9 3.2
It can be seen how Input 1 started and everything else followed.
My question: in MATLAB, what should I look for in order to detect such I/O delays in more complex datasets?
You should take a close look at xcorr.
xcorr computes the cross-correlation between two vectors (typically time signals), i.e. their similarity as a function of the time shift between them. A constant I/O lag should show up as a local maximum of the correlation coefficient at that shift.
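A minimal sketch of that idea, assuming i1 and out are column vectors holding Input 1 and the output at a fixed sampling interval (the variable names are mine):
[r, lags] = xcorr(out - mean(out), i1 - mean(i1)); % remove the means before correlating
[~, idx] = max(r);
lagSamples = lags(idx); % a positive value means out trails i1 by that many samples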

How to find values of the input data in plotsomhits

I have used the SOM toolbox in MATLAB on the iris data set, following the example below. Using plotsomhits, I can see how many data values fall in each neuron of my SOM's neuron grid. However, I wish to know the actual data values which are grouped in every neuron of the given SOM configuration. Is there any way to do it? This is the example I used:
x = iris_dataset; % assumed: the sample iris inputs (4 x 150) that ship with the toolbox
net = selforgmap([8 8]);
view(net)
[net,tr] = train(net,x);
nntraintool
plotsomhits(net,x)
Not that hard: plotsomhits just plots, "fancily", the results of simulating the net.
So if you simulate it and add up the "hits", you have the result!
Basically:
hits = sum(sim(net,x)');
For your net, these were my results, which coincide with the numbers in the plotsomhits plot:
hits= 6 5 0 7 15 7 4 0 8 20 3 3 9 3 0 8 6 3 11 4 5 5 7 10 1
P.S.: you can learn a lot from this amazing SO answer:
MATLAB: help needed with Self-Organizing Map (SOM) clustering
You need to convert the output vector to indices first; then you can see which input values each neuron corresponds to.
>> input_neuron_mapping = vec2ind(net(x))';
Now, look into the neuron's inputs.
For example, you want to see neuron input values for neuron 2.
>> neuron_2_input_indices = find(input_neuron_mapping == 2)
>> neuron_2_input_values = x(:, neuron_2_input_indices) % x stores one sample per column
It will display all the input values from your data.
Read here for more details:
https://bioinformaticsreview.com/20220603/how-to-get-input-values-from-som-sample-hits-plot-in-matlab/

MATLAB: Subtracting matrix subsets by specific rows

Here is an example of a subset of the matrix I would like to use:
1 3 5
2 3 6
1 1 1
3 5 4
5 5 5
8 8 0
This matrix is in fact 3000 x 3.
For the first 3 rows, I wish to subtract the first row of those three from each of them.
For the second 3 rows, I wish to subtract the first row of those three from each of them, and so on.
As such, the output matrix will look like:
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
What code in MATLAB will do this for me?
You could also do this completely vectorized by using mat2cell, cellfun, then cell2mat. Assuming our matrix is stored in A, try:
numBlocks = size(A,1) / 3;
B = mat2cell(A, 3*ones(1,numBlocks), 3);
C = cellfun(@(x) x - x([1 1 1], :), B, 'UniformOutput', false);
D = cell2mat(C); %//Output
The first line figures out how many 3 x 3 blocks we need. This is assuming that the number of rows is a multiple of 3. The second line uses mat2cell to decompose each 3 x 3 block and places them into individual cells. The third line then uses cellfun so that for each cell in our cell array (which is a 3 x 3 matrix), it takes each row of the 3 x 3 matrix and subtracts from it the first row. This is very much like what @David did, except I didn't use repmat, to minimize overhead. The fourth line then takes each of these matrices and stacks them back so that we get our final matrix in the end.
Example (this is using the matrix that was defined in your post):
A = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];
numBlocks = size(A,1) / 3;
B = mat2cell(A, 3*ones(1, numBlocks), 3);
C = cellfun(@(x) x - x([1 1 1], :), B, 'UniformOutput', false);
D = cell2mat(C);
Output:
D =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
In hindsight, I think @David is right with respect to performance gains. Unless this code is repeated many times, I think the for loop will be more efficient. Either way, I wanted to provide another alternative. Cool exercise!
Edit: Timing and Size Tests
Because of our discussion earlier, I have decided to do timing and size tests. These tests were performed on an Intel i7-4770 @ 3.40 GHz CPU with 16 GB of RAM, using MATLAB R2014a on Windows 7 Ultimate. Basically, I did the following:
Test #1 - Set the random seed generator to 1 for reproducibility. I wrote a loop that cycled 10000 times. For each iteration in the loop, I generated a random integer 3000 x 3 matrix, then performed each of the methods that were described here. I took note of how long it took for each method to complete after 10000 cycles. The timing results are:
David's method: 0.092129 seconds
rayryeng's method: 1.9828 seconds
natan's method: 0.20097 seconds
natan's bsxfun method: 0.10972 seconds
Divakar's bsxfun method: 0.0689 seconds
As such, Divakar's method is the fastest, followed by David's for loop method, followed closely by natan's bsxfun method, followed by natan's original kron method, followed by the sloth (a.k.a mine).
Test #2 - I decided to see how fast this would get as you increase the size of the matrix. The set up was as follows. I did 1000 iterations, and at each iteration, I increase the size of the matrix rows by 3000 each time. As such, iteration 1 consisted of a 3000 x 3 matrix, the next iteration consisted of a 6000 x 3 matrix and so on. The random seed was set to 1 again. At each iteration, the time taken to complete the code was taken a note of. To ensure fairness, the variables were cleared at each iteration before the processing code began. As such, here is a stem plot that shows you the timing for each size of matrix. I subsetted the plot so that it displays timings from 200000 x 3 to 300000 x 3. Take note that the horizontal axis records the number of rows at each iteration. The first stem is for 3000 rows, the next is for 6000 rows and so on. The columns remain the same at 3 (of course).
I can't explain the random spikes throughout the graph... they're probably attributable to something happening in RAM. However, I'm very sure I cleared the variables at each iteration to ensure there is no bias. In any case, Divakar and David are closely tied. Next comes natan's bsxfun method, then natan's kron method, followed last by mine. Interesting to see how Divakar's bsxfun method and David's for method are side-by-side in timing.
Test #3 - I repeated what I did for Test #2, but using natan's suggestion, I decided to go on a logarithmic scale. I did 6 iterations, starting at a 3000 x 3 matrix, and increasing the rows by 10 fold after. As such, the second iteration had 30000 x 3, the third iteration had 300000 x 3 and so on, up until the last iteration, which is 3e8 x 3.
I have plotted on a semi-logarithmic scale on the horizontal axis, while the vertical axis is still a linear scale. Again, the horizontal axis describes the number of rows in the matrix.
I changed the vertical limits so we can see most of the methods. My method is so poor performing that it would squash the other timings towards the lower end of the graph. As such, I changed the viewing limits to take my method out of the picture. Essentially what was seen in Test #2 is verified here.
Here's another way to implement this with bsxfun, slightly different from natan's bsxfun implementation -
t1 = reshape(a,3,[]); %// a is the input matrix
out = reshape(bsxfun(@minus,t1,t1(1,:)),[],3); %// desired output
A slightly shorter and vectorized way would be (if a is your matrix):
b=a-kron(a(1:3:end,:),ones(3,1));
let's test:
a=[1 3 5
2 3 6
1 1 1
3 5 4
5 5 5
8 8 0]
a-kron(a(1:3:end,:),ones(3,1))
ans =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
Edit
Here's a bsxfun solution (less elegant, but hopefully faster):
a-reshape(bsxfun(@times,ones(1,3),permute(a(1:3:end,:),[2 3 1])),3,[])'
ans =
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
Edit 2
OK, this got me curious, as I know bsxfun can start to be less efficient for bigger array sizes. So I tried to check my two solutions using timeit (because they are one-liners, that's easy). Here it is:
range = 3*round(logspace(1,6,200));
for n = 1:numel(range)
    a = rand(range(n),3);
    f = @() a - kron(a(1:3:end,:),ones(3,1));
    g = @() a - reshape(bsxfun(@times,ones(1,3),permute(a(1:3:end,:),[2 3 1])),3,[])';
    t1(n) = timeit(f);
    t2(n) = timeit(g);
end
semilogx(range,t1./t2);
So I didn't test the for loop or Divakar's bsxfun, but you can see that for arrays smaller than about 3e4 rows kron is better than bsxfun, and this changes for larger arrays (a ratio < 1 means kron took less time for that array size). This was done on MATLAB 2012a, Windows 7 (an i5 machine).
Simple for loop. This does each 3x3 block separately.
A = randi(5,9,3)             % example data
B = A(1:3:end,:)             % first row of each 3-row block
D = zeros(size(A));          % preallocate the output
for i = 1:size(A,1)/3
    D(3*i-2:3*i,:) = A(3*i-2:3*i,:) - repmat(B(i,:),3,1);
end
D
Whilst it may be possible to vectorise this, I don't think the performance gains would be worth it unless you do this many times. For a 3000 x 3 matrix it doesn't take long at all.
Edit: In fact this seems to be pretty fast. I think that's because MATLAB's JIT compilation can speed up simple for loops well.
You can do it using just indexing:
a(:) = a(:) - a(3*floor((0:numel(a)-1)/3)+1).';
Of course, the 3 above can be replaced by any other block size, as long as it divides the number of rows; the linear indexing runs down each column, so a partial final block would bleed into the next column.
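Applied to the example matrix from the question, as a quick check:
a = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];
a(:) = a(:) - a(3*floor((0:numel(a)-1)/3)+1).'; % subtract each block's first row
% a now equals the expected output listed in the question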