I am facing a classification problem with 4 classes. I used Weka for this classification, and I get a result in this form:
Correctly Classified Instances 3860 96.5 %
Incorrectly Classified Instances 140 3.5 %
Kappa statistic 0.9533
Mean absolute error 0.0178
Root mean squared error 0.1235
Relative absolute error 4.7401 %
Root relative squared error 28.5106 %
Total Number of Instances 4000
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure ROC Area Class
0.98 0.022 0.936 0.98 0.957 0.998 A
0.92 0.009 0.973 0.92 0.946 0.997 B
0.991 0.006 0.982 0.991 0.987 1 C
0.969 0.01 0.971 0.969 0.97 0.998 D
Weighted Avg. 0.965 0.012 0.965 0.965 0.965 0.998
=== Confusion Matrix ===
a b c d <-- classified as
980 17 1 2 | a = A
61 920 1 18 | b = B
0 0 991 9 | c = C
6 9 16 969 | d = D
My goal now is to draw the DET (Detection Error Trade-off) curve from the results provided by Weka.
I found MATLAB code that allows me to draw the DET curve; here are some lines from this script:
Ntrials_True = 1000;                              % number of true (target) trials
True_scores = randn(Ntrials_True,1);              % target scores drawn from N(0,1)
Ntrials_False = 1000;                             % number of false (impostor) trials
mean_False = -3;                                  % mean of the impostor score distribution
stdv_False = 1.5;                                 % standard deviation of the impostor scores
False_scores = stdv_False * randn(Ntrials_False,1) + mean_False;  % scores ~ N(-3, 1.5^2)
%-----------------------
% Compute Pmiss and Pfa from experimental detection output scores
[P_miss,P_fa] = Compute_DET(True_scores,False_scores);
The code of the function Compute_DET is:
function [Pmiss, Pfa] = Compute_DET(true_scores, false_scores)
num_true = max(size(true_scores));
num_false = max(size(false_scores));
total = num_true + num_false;
Pmiss = zeros(total+1, 1); % preallocate for speed
Pfa = zeros(total+1, 1);   % preallocate for speed
scores(1:num_false,1) = false_scores;      % column 1: score value
scores(1:num_false,2) = 0;                 % column 2: 0 = false (impostor) trial
scores(num_false+1:total,1) = true_scores;
scores(num_false+1:total,2) = 1;           % 1 = true (target) trial
scores = DETsort(scores);                  % sort ascending by score (DETware helper)
sumtrue = cumsum(scores(:,2),1);           % true trials scoring below each threshold (misses)
sumfalse = num_false - ([1:total]' - sumtrue);  % false trials above it (false alarms)
Pmiss(1) = 0;
Pfa(1) = 1.0;
Pmiss(2:total+1) = sumtrue ./ num_true;
Pfa(2:total+1) = sumfalse ./ num_false;
return
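Once P_miss and P_fa are available, the DET curve is conventionally drawn on a normal-deviate (probit) scale rather than a linear one. A minimal plotting sketch, assuming the Statistics Toolbox is available for norminv (the original DETware package ships its own Plot_DET helper for this):
[P_miss, P_fa] = Compute_DET(True_scores, False_scores);
plot(norminv(P_fa), norminv(P_miss));   % probit scale on both axes
xlabel('False alarm probability (normal deviate scale)');
ylabel('Miss probability (normal deviate scale)');
title('DET curve');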
However, I have a problem interpreting the meaning of the different parameters. For example, what is the significance of mean_False and stdv_False, and what is their correspondence with the Weka output?
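For context: mean_False and stdv_False in the snippet above are only parameters of the synthetic demo data; they are the mean and standard deviation of the Gaussian scores generated for the false (impostor) trials, so they have no direct Weka counterpart. To feed Compute_DET with real Weka output, one would replace the synthetic scores with per-instance class probabilities. A hedged sketch using the Weka Java API from MATLAB, treating class A (class index 0) one-vs-rest; distributionForInstance is a real API method, while the surrounding loop is illustrative:
True_scores = []; False_scores = [];
for i = 0:data.numInstances()-1
    inst = data.instance(i);
    dist = classifier.distributionForInstance(inst);  % class probability vector
    score = dist(1);              % P(class A); MATLAB indexes the Java array from 1
    if inst.classValue() == 0     % actual class is A
        True_scores(end+1,1) = score;
    else
        False_scores(end+1,1) = score;
    end
end
[P_miss, P_fa] = Compute_DET(True_scores, False_scores);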
Related question:
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);
%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.25 -M 2') );
classifier.buildClassifier(data);
classifier.toString()
ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0.01 seconds
=== Error on training split ===
Correctly Classified Instances 767 99.2238 %
Incorrectly Classified Instances 6 0.7762 %
Kappa statistic 0.9882
Mean absolute error 0.0087
Root mean squared error 0.0658
Relative absolute error 1.9717 %
Root relative squared error 14.042 %
Total Number of Instances 773
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.994 0.009 0.987 0.994 0.990 0.984 0.999 0.999 Nikon
1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Sony
0.981 0.004 0.990 0.981 0.985 0.980 0.999 0.997 Canon
Weighted Avg. 0.992 0.004 0.992 0.992 0.992 0.988 1.000 0.999
=== Confusion Matrix ===
a b c <-- classified as
306 0 2 | a = Nikon
0 258 0 | b = Sony
4 0 203 | c = Canon
=== Error on test split ===
Correctly Classified Instances 358 89.9497 %
Incorrectly Classified Instances 40 10.0503 %
Kappa statistic 0.8482
Mean absolute error 0.0656
Root mean squared error 0.2464
Relative absolute error 14.8485 %
Root relative squared error 52.2626 %
Total Number of Instances 398
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.885 0.089 0.842 0.885 0.863 0.787 0.908 0.832 Nikon
0.993 0.000 1.000 0.993 0.997 0.995 0.997 0.996 Sony
0.796 0.060 0.841 0.796 0.818 0.749 0.897 0.744 Canon
Weighted Avg. 0.899 0.048 0.900 0.899 0.899 0.853 0.938 0.867
=== Confusion Matrix ===
a b c <-- classified as
123 0 16 | a = Nikon
0 145 1 | b = Sony
23 0 90 | c = Canon
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);
%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.1 -M 1') );
classifier.buildClassifier(data);
classifier.toString()
ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0 seconds
=== Error on training split ===
Correctly Classified Instances 767 99.2238 %
Incorrectly Classified Instances 6 0.7762 %
Kappa statistic 0.9882
Mean absolute error 0.0087
Root mean squared error 0.0658
Relative absolute error 1.9717 %
Root relative squared error 14.042 %
Total Number of Instances 773
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.994 0.009 0.987 0.994 0.990 0.984 0.999 0.999 Nikon
1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Sony
0.981 0.004 0.990 0.981 0.985 0.980 0.999 0.997 Canon
Weighted Avg. 0.992 0.004 0.992 0.992 0.992 0.988 1.000 0.999
=== Confusion Matrix ===
a b c <-- classified as
306 0 2 | a = Nikon
0 258 0 | b = Sony
4 0 203 | c = Canon
=== Error on test split ===
Correctly Classified Instances 358 89.9497 %
Incorrectly Classified Instances 40 10.0503 %
Kappa statistic 0.8482
Mean absolute error 0.0656
Root mean squared error 0.2464
Relative absolute error 14.8485 %
Root relative squared error 52.2626 %
Total Number of Instances 398
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.885 0.089 0.842 0.885 0.863 0.787 0.908 0.832 Nikon
0.993 0.000 1.000 0.993 0.997 0.995 0.997 0.996 Sony
0.796 0.060 0.841 0.796 0.818 0.749 0.897 0.744 Canon
Weighted Avg. 0.899 0.048 0.900 0.899 0.899 0.853 0.938 0.867
=== Confusion Matrix ===
a b c <-- classified as
123 0 16 | a = Nikon
0 145 1 | b = Sony
23 0 90 | c = Canon
I get the same result with both split options, and it is the result for the default options (-C 0.25 -M 2) of the J48 classifier.
Please help! I have been stuck here for a long time. I have tried different approaches, but nothing has worked for me.
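While I cannot say for certain why both option strings print identical output, one way to rule out the string-option interface is to build and evaluate on an explicit 66% split, so that the options set with setOptions are guaranteed to be the ones in effect. A hedged sketch (the random seed and split logic are illustrative, not from the original code):
data.randomize(java.util.Random(1));    % seed 1 is arbitrary
nTrain = floor(0.66 * data.numInstances());
train = weka.core.Instances(data, 0, nTrain);
test  = weka.core.Instances(data, nTrain, data.numInstances() - nTrain);
classifier = weka.classifiers.trees.J48();
classifier.setOptions(weka.core.Utils.splitOptions('-C 0.1 -M 1'));
classifier.buildClassifier(train);      % options above are definitely in effect here
ev = weka.classifiers.Evaluation(train);
ev.evaluateModel(classifier, test);
disp(char(ev.toSummaryString()))
disp(char(ev.toMatrixString()))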
I have two vectors of XYZ points with different sizes. We can call them Data1 and Data2, where:
Data1 = [1000 2000 3.55; ...
950 2200 4.5; ...
1050 2350 5.5; ...
1025 2500 6; ...
1075 2600 7; ...
1000 2700 8];
Data2 = [1000 2650 7.95; ...
1000 2750 8.16; ...
1000 2700 9; ...
1025 3000 10];
The maximum acceptable difference between points is 100 meters for the position (X, Y) and 0.2 for the depth (Z).
In this case, the points between the vectors will be P_Data1 = [1000 2700 8] and P_Data2 = [1000 2650 7.95], because the distance is acceptable and the depth is the nearest.
Does anyone know a function that can do this matching for me? I think MATLAB has a function for this problem with high performance, since I will do this calculation for thousands of points.
I'm currently using a nested loop, but the performance is very bad, because I calculate all the distances, then all the differences between the depths for every point, and then filter the matrix.
In short, between two vectors of different sizes, I want to find the pair of points with the closest depths that also lies within the defined tolerances.
I thank you for all the help!
Data1 = [950 2200 4.5; ...
1050 2350 5.5; ...
1025 2500 6; ...
1075 2600 7; ...
1000 2700 8];
Data2 = [1000 2650 7.95; ...
1000 2750 8.16; ...
1000 2700 9; ...
1025 3000 10];
vec1 = Data1(:,3);
vec2 = Data2(:,3);
[p,q] = meshgrid(vec1, vec2);      % p(r,c) = vec1(c), q(r,c) = vec2(r)
sub = abs(p(:) - q(:));            % all pairwise depth differences
found = false;
while ~found
    [M,I] = min(sub);
    if ~isfinite(M)
        error('No pair satisfies the position condition.');
    end
    % convert the linear (column-major) index back to grid indices
    IndData1 = floor((I-1)/numel(vec2)) + 1;   % column -> row of Data1
    IndData2 = mod(I-1, numel(vec2)) + 1;      % row    -> row of Data2
    % this basically finds the smallest possible Z difference;
    % check if it also works for condition 2 (the 100 m position tolerance):
    checkcolumn1 = abs(Data1(IndData1,1) - Data2(IndData2,1));
    checkcolumn2 = abs(Data1(IndData1,2) - Data2(IndData2,2));
    if checkcolumn1 < 100 && checkcolumn2 < 100
        output1 = Data1(IndData1,:);
        output2 = Data2(IndData2,:);
        found = true;
    else
        sub(I) = Inf;   % remove this minimum so the next-smallest is tried
    end
end
This program should do what you ask. Basically, it first finds the minimum difference in the Z column (you could also add the condition that it has to be less than 0.2, by the way; I just assumed that some value smaller than 0.2 exists). Then it checks whether the position condition can be fulfilled. Although it uses a loop for the search, it is actually quite efficient, as it jumps out of the loop as soon as it finds the correct values.
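If the loop is still too slow for thousands of points, a fully vectorized alternative can compute all pairwise differences at once. A hedged sketch, assuming the Statistics Toolbox is available for pdist2 and that the full M-by-N distance matrices fit in memory (it uses the Euclidean XY distance for the 100 m tolerance):
dXY = pdist2(Data1(:,1:2), Data2(:,1:2));            % pairwise XY distances
dZ  = abs(bsxfun(@minus, Data1(:,3), Data2(:,3)'));  % pairwise depth differences
dZ(dXY > 100 | dZ > 0.2) = Inf;                      % apply both tolerances
[minZ, I] = min(dZ(:));
if isfinite(minZ)
    [i1, i2] = ind2sub(size(dZ), I);
    P_Data1 = Data1(i1,:)    % closest-depth admissible pair
    P_Data2 = Data2(i2,:)
end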
So, I'm hoping this is a really dumb thing I'm doing, and there's an easy answer. I'm trying to train a 2x3x1 neural network to do the XOR problem. It wasn't working, so I decided to dig in to see what was happening. Finally, I decided to assign the weights myself. This was the weight vector I came up with:
theta1 = [11 0 -5; 0 12 -7;18 17 -20];
theta2 = [14 13 -28 -6];
(In MATLAB notation.) I deliberately tried to make sure no two weights were the same (barring the zeros).
And my code, really simple, in MATLAB, is:
function layer2 = xornn(iters)
if nargin < 1
iters = 50
end
function s = sigmoid(X)
s = 1.0 ./ (1.0 + exp(-X));
end
T = [0 1 1 0];
X = [0 0 1 1; 0 1 0 1; 1 1 1 1];
theta1 = [11 0 -5; 0 12 -7;18 17 -20];
theta2 = [14 13 -28 -6];
for i = [1:iters]
layer1 = [sigmoid(theta1 * X); 1 1 1 1];
layer2 = sigmoid(theta2 * layer1)
delta2 = T - layer2;
delta1 = layer1 .* (1-layer1) .* (theta2' * delta2);
% remove the bias from delta 1. There's no real point in a delta on the bias.
delta1 = delta1(1:3,:);
theta2d = delta2 * layer1';
theta1d = delta1 * X';
theta1 = theta1 - 0.1 * theta1d;
theta2 = theta2 - 0.1 * theta2d;
end
end
I believe that's right. I tested various parameters (of the thetas) with the finite differences method to see if they were right, and they seemed to be.
But when I run it, everything eventually just boils down to returning all zeros. If I do xornn(1) (for 1 iteration), I get:
0.0027 0.9966 0.9904 0.0008
But, if I do xornn(35)
0.0026 0.9949 0.9572 0.0007
(It has started a descent in the wrong direction), and by the time I get to xornn(45) I get:
0.0018 0.0975 0.0000 0.0003
If I run it for 10,000 iterations, it just returns all 0s.
What is going on? Must I add regularization? I would have thought such a simple network wouldn't need it. But regardless, why does it move away from an obvious good solution that I have hand-fed it?
Thanks!
AAARRGGHHH! The solution was simply a matter of changing
theta1 = theta1 - 0.1 * theta1d;
theta2 = theta2 - 0.1 * theta2d;
to
theta1 = theta1 + 0.1 * theta1d;
theta2 = theta2 + 0.1 * theta2d;
sigh
Now, though, I need to figure out how I was computing the negative derivative when what I thought I was computing was the ... never mind. I'll post here anyway, just in case it helps someone else.
So, z is the sum of inputs to the sigmoid, and y is the output of the sigmoid.
C = -(T*log(y) + (1-T)*log(1-y))
dC/dy = -((T/y) - (1-T)/(1-y))
      = -((T(1-y) - y(1-T)) / (y(1-y)))
      = -((T - Ty - y + Ty) / (y(1-y)))
      = -((T - y) / (y(1-y)))
      = ((y - T) / (y(1-y)))     # This is the source of all my woes.
dy/dz = y(1-y)
dC/dz = ((y - T) / (y(1-y))) * y(1-y)
      = (y - T)
So the problem is that I was accidentally computing T-y, because I forgot about the negative sign in front of the cost function. Then I was subtracting what I thought was the gradient, but it was in fact the negative gradient. And there, that's the problem.
Once I did that:
function layer2 = xornn(iters)
if nargin < 1
iters = 50
end
function s = sigmoid(X)
s = 1.0 ./ (1.0 + exp(-X));
end
T = [0 1 1 0];
X = [0 0 1 1; 0 1 0 1; 1 1 1 1];
theta1 = [11 0 -5; 0 12 -7;18 17 -20];
theta2 = [14 13 -28 -6];
for i = [1:iters]
layer1 = [sigmoid(theta1 * X); 1 1 1 1];
layer2 = sigmoid(theta2 * layer1)
delta2 = T - layer2;
delta1 = layer1 .* (1-layer1) .* (theta2' * delta2);
% remove the bias from delta 1. There's no real point in a delta on the bias.
delta1 = delta1(1:3,:);
theta2d = delta2 * layer1';
theta1d = delta1 * X';
theta1 = theta1 + 0.1 * theta1d;
theta2 = theta2 + 0.1 * theta2d;
end
end
xornn(50) returns 0.0028 0.9972 0.9948 0.0009 and
xornn(10000) returns 0.0016 0.9989 0.9993 0.0005
Phew! Maybe this will help someone else in debugging their version..
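As a quick way to catch this kind of sign error, the finite-difference test mentioned above can be boiled down to a scalar case. A minimal sketch, assuming the cross-entropy cost C = -(T*log(y) + (1-T)*log(1-y)):
T = 1; z = 0.3; h = 1e-6;
sig = @(z) 1 ./ (1 + exp(-z));
C = @(z) -(T*log(sig(z)) + (1-T)*log(1-sig(z)));
numgrad = (C(z+h) - C(z-h)) / (2*h);   % central finite difference
angrad  = sig(z) - T;                  % analytic dC/dz = y - T
fprintf('numeric: %.6f  analytic: %.6f\n', numgrad, angrad);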
I hope someone can help me.
Let's say I have the following two vectors:
t = [1 2 3 4 5];
m = [10 8 6 4 2];
plot(t,m)
And I want to find the slope of the linear fit (1st degree),
so I write:
polyfit(t,m,1)
I then obtain the following answer:
ans =
-2.0000 12.0000
Meaning that y = -2x + 12
How do I re-calculate the coefficient as a percentage slope?
The reason I am interested in this is that I want to discard all data that has a slope < 80% (and proceed with data with slope coefficients between 80% and 100%).
Assuming that you define percentage slope by the formula given in item 2 of the Nomenclature section of the Wikipedia Grade page, 100 * dy/dx, your percentage slope is just the coefficient of x^1 multiplied by 100. You can test for slopes between 80% and 100% as follows:
t = [1 2 3 4 5];
m = [10 8 6 4 2];
p = polyfit(t,m,1);
g = p(1) * 100;
if g > 80 && g < 100
% Do what you need to do...
end
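One caveat, as an assumption about your intent: with the example data the fitted slope is negative, so g is -200 and the branch above will not run. If only the magnitude of the slope matters, compare the absolute value instead:
g = abs(p(1)) * 100;   % magnitude of the slope as a percentage
if g > 80 && g < 100
    % Do what you need to do...
end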
In Weka, I can easily find the TP rate and the total number of correctly classified instances from the confusion matrix, but is there any way to see the exact numbers of TPs and/or TNs?
And do you know any way to find these values in MATLAB ANFIS?
Since you mention MATLAB, I'm assuming you are using the Java API to the Weka library to programmatically build classifiers.
In that case, you can evaluate the model using the weka.classifiers.Evaluation class, which provides all sorts of statistics.
Assuming you already have the weka.jar file on the Java class path (see the javaaddpath function), here is an example in MATLAB:
%# data
fName = 'C:\Program Files\Weka-3-7\data\iris.arff';
loader = weka.core.converters.ArffLoader();
loader.setFile( java.io.File(fName) );
data = loader.getDataSet();
data.setClassIndex( data.numAttributes()-1 );
%# classifier
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.25 -M 2') );
classifier.buildClassifier( data );
%# evaluation
evl = weka.classifiers.Evaluation(data);
pred = evl.evaluateModel(classifier, data, {''});
%# display
disp(classifier.toString())
disp(evl.toSummaryString())
disp(evl.toClassDetailsString())
disp(evl.toMatrixString())
%# confusion matrix and other stats
cm = evl.confusionMatrix();
%# number of TP/TN/FP/FN with respect to class=1 (Iris-versicolor)
tp = evl.numTruePositives(1);
tn = evl.numTrueNegatives(1);
fp = evl.numFalsePositives(1);
fn = evl.numFalseNegatives(1);
%# class=XX is a zero-based index which maps to the following class values
classValues = arrayfun(@(k)char(data.classAttribute.value(k-1)), ...
1:data.classAttribute.numValues, 'Uniform',false);
The output:
J48 pruned tree
------------------
petalwidth <= 0.6: Iris-setosa (50.0)
petalwidth > 0.6
| petalwidth <= 1.7
| | petallength <= 4.9: Iris-versicolor (48.0/1.0)
| | petallength > 4.9
| | | petalwidth <= 1.5: Iris-virginica (3.0)
| | | petalwidth > 1.5: Iris-versicolor (3.0/1.0)
| petalwidth > 1.7: Iris-virginica (46.0/1.0)
Number of Leaves : 5
Size of the tree : 9
Correctly Classified Instances 147 98 %
Incorrectly Classified Instances 3 2 %
Kappa statistic 0.97
Mean absolute error 0.0233
Root mean squared error 0.108
Relative absolute error 5.2482 %
Root relative squared error 22.9089 %
Coverage of cases (0.95 level) 98.6667 %
Mean rel. region size (0.95 level) 34 %
Total Number of Instances 150
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Iris-setosa
0.980 0.020 0.961 0.980 0.970 0.955 0.990 0.969 Iris-versicolor
0.960 0.010 0.980 0.960 0.970 0.955 0.990 0.970 Iris-virginica
Weighted Avg. 0.980 0.010 0.980 0.980 0.980 0.970 0.993 0.980
=== Confusion Matrix ===
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 49 1 | b = Iris-versicolor
0 2 48 | c = Iris-virginica
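For the MATLAB ANFIS part of the question, where the Weka API is not available, the same counts can be derived from any confusion matrix you build yourself. A hedged sketch (the matrix is the iris one from above; rows are actual classes, columns are predicted):
cm = [50 0 0; 0 49 1; 0 2 48];     % rows = actual, cols = predicted
for k = 1:size(cm,1)
    tp = cm(k,k);                   % predicted k, actually k
    fp = sum(cm(:,k)) - tp;         % predicted k, actually another class
    fn = sum(cm(k,:)) - tp;         % actually k, predicted otherwise
    tn = sum(cm(:)) - tp - fp - fn; % everything else
    fprintf('class %d: TP=%d FP=%d FN=%d TN=%d\n', k, tp, fp, fn, tn);
end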