Why does the decision tree show a correct classification while some instances are being misclassified?

I am using WEKA with 10-fold cross-validation or a 66% percentage split to create the training and testing sets, and C4.5 (J48) as the classifier.
My results show that some instances are misclassified, but when I visualize the tree, those instances should have been classified correctly according to the tree!
I don't see this when the testing set is the same as the training set. If the classifier decided to create such a tree, why are some instances not being classified according to this tree?
Thanks in advance.

It sounds like you are after a completely unpruned tree, such that the training data returns 100% accuracy.
The options that are likely causing the undesirable results are outlined below:
unpruned – by default J48 prunes the tree, which minimises the number of rules and can lead to lower generalisation error, but the pruned tree may no longer classify every training instance correctly. Enabling this option disables pruning.
minNumObj – the minimum number of instances required at a leaf. If this is higher than one, you could get some errors on the training data.
I wouldn't generally recommend using these options for a given problem, but if you are trying to obtain a 100% result on the training data, this would be the place to start.
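For reference, here is a rough sketch of how those options can be set programmatically, assuming the third-party python-weka-wrapper3 package and a placeholder file name train.arff:

import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.classifiers import Classifier, Evaluation

jvm.start()

# load the training data (placeholder file name) and use the last attribute as the class
loader = Loader(classname="weka.core.converters.ArffLoader")
data = loader.load_file("train.arff")
data.class_is_last()

# J48 with pruning disabled (-U) and a minimum of 1 instance per leaf (-M 1)
j48 = Classifier(classname="weka.classifiers.trees.J48", options=["-U", "-M", "1"])
j48.build_classifier(data)

# evaluating on the training data itself should now be at or near 100% correct
# (identical instances with conflicting labels can still be misclassified)
evl = Evaluation(data)
evl.test_model(j48, data)
print(evl.percent_correct)

jvm.stop()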
Hope this helps!

Related

Why do I need to shuffle data, when I get the path to images?

train_image_paths = [str(path) for path in list(train_path.glob('*/*.jpeg'))]
random.shuffle(train_image_paths)
Above is a sample of the code.
I have the same question in this case too:
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(64).shuffle(10000)
I don't understand why I need the shuffle in these cases.
In the obvious case, shuffling is helpful if your training data is sorted by class labels. By shuffling, you allow your model to "see" a wide range of data points each belonging to different classes in the context of classification. If the model goes through a sorted training data, your model runs the risk of overfitting to certain classes. In short, shuffling helps reduce variance and ensures that the train, test, and validation sets are representative of the true distribution.
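As a tiny illustration (plain Python with made-up file names), compare the first few items before and after shuffling when the paths come back grouped by class:

import random

# hypothetical paths as a glob might return them: all "cat" images first, then all "dog" images
train_image_paths = [f"cat/{i}.jpeg" for i in range(4)] + [f"dog/{i}.jpeg" for i in range(4)]
print(train_image_paths[:4])        # a batch drawn from the front would contain only one class

random.shuffle(train_image_paths)   # in-place shuffle, as in the question
print(train_image_paths[:4])        # now the front of the list mixes both classes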

Building a Weka classifier

I'm trying to build a classifier in Weka. I have two data sets: training and testing. The two files are identical in structure, with the same number and type of attributes. However, the Weka Explorer gives me an error saying Train and test set are not compatible. How can I resolve this error?
Here is a snapshot of the two sets:
[screenshot: training set]
[screenshot: testing set]
Searching through the Weka wiki, here's what I found:
One of Weka's fundamental assumptions is that the structure of the training and test sets is exactly the same. This does not only mean that you need the exact same number of attributes, but also the exact same type. In the case of nominal attributes, you must ensure that the number of labels and the order of the labels are the same.
https://weka.wikispaces.com/Why+do+I+get+the+error+message+%27training+and+test+set+are+not+compatible%27%3F
They may seem to be the same, but you never know; you should at least try one of the visual diff applications suggested there, or the quick check sketched below.
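If you'd rather do that check programmatically, here is a minimal Python sketch (assuming the files are named train.arff and test.arff) that compares the @attribute declarations of the two headers; a different attribute type or a different nominal label order shows up as a differing line:

def arff_attributes(path):
    # collect the @attribute lines of the ARFF header
    with open(path) as f:
        return [line.strip() for line in f if line.strip().lower().startswith("@attribute")]

train_attrs = arff_attributes("train.arff")
test_attrs = arff_attributes("test.arff")

if len(train_attrs) != len(test_attrs):
    print("different number of attributes")
for i, (a, b) in enumerate(zip(train_attrs, test_attrs)):
    if a != b:
        print("attribute %d differs:\n  train: %s\n  test:  %s" % (i, a, b))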
I hope this helped you in some way.

How to automatically optimize a classifier in Weka so that a given class contains only 100% sure data?

I have two (or three) classes and each class can only possess one label.
I want to optimize (automatically if possible) the parameters and thresholds of classifiers so that my first class contains only 100% sure data, even if it contains a small number of instances.
I don't mind if the remaining classes contain false alarms or correct rejections.
I don't mind having unclassified data.
I have already searched on Stack Overflow and on the Weka wiki, but maybe my lack of knowledge concerning Weka made me miss some keywords.
I also tried to perform the task with the well-known "iris" database, but I think that in this case any class can be 100% sure.
Yet, I have only succeeded in testing multiple classifiers and tuning them manually, without reaching 100% correct for my first class. (I checked this result in the confusion matrix given by Weka's report.)
Somehow, I know it is possible for my class to contain 100% sure data because I managed to do it in Matlab with a simple threshold set manually. But I would like to try a bigger database, obtain a better threshold, and use the power of Weka.
Any suggestions would be helpful, thanks!
You probably need the CostSensitiveClassifier among the "meta" classifiers.
If you are working in the Explorer, its configuration dialog lets you do the following:
Choose your base classifier (something beyond ZeroR :) ).
Set your cost matrix. For a 2-class problem this will be a 2x2 matrix.
By setting one off-diagonal component very large (>>1, let us say 1000) you ensure that misclassifying one class (your "first" class) is 1000 times more expensive than misclassifying the other class. This should do the job.
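Conceptually, the cost matrix turns prediction into a minimum-expected-cost decision, and which off-diagonal entry you inflate decides which mistake the classifier avoids. Here is a small plain-Python sketch of that rule (not Weka itself, with made-up numbers), set up so that predicting the first class for an instance that actually belongs to the other class is the expensive mistake, which is what keeps the first class "pure":

import numpy as np

# cost[i][j] = cost of predicting class j when the true class is i
# (rows = actual class, columns = predicted class)
cost = np.array([[0.0,    1.0],
                 [1000.0, 0.0]])   # putting a class-1 instance into class 0 costs 1000

def min_expected_cost_prediction(class_probs, cost):
    # expected cost of predicting class j is sum_i p(i) * cost[i][j]
    return int(np.argmin(class_probs @ cost))

print(min_expected_cost_prediction(np.array([0.60, 0.40]), cost))      # 1: not sure enough for class 0
print(min_expected_cost_prediction(np.array([0.9995, 0.0005]), cost))  # 0: confident enough for class 0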

How to use KNN to classify data in MATLAB?

I'm having problems understanding how K-NN classification works in MATLAB.
Here's the problem: I have a large dataset (65 features for over 1500 subjects) and its respective class labels (0 or 1).
According to what's been explained to me, I have to divide the data into training, test and validation subsets to perform supervised training on the data, and classify it via K-NN.
First of all, what's the best ratio to divide the 3 subgroups (1/3 of the size of the dataset each?).
I've looked into the ClassificationKNN/fitcknn functions, as well as the crossval function (ideally to divide the data), but I'm really not sure how to use them.
To sum up, I wanted to
- divide data into 3 groups
- "train" the KNN (I know it's not a method that requires training, but the equivalent to training) with the training subset
- classify the test subset and get its classification error/performance
- what's the point of having a validation set?
I hope you can help me, thank you in advance
EDIT: I think I was able to do it, but, if that's not asking too much, could you check whether I missed something? This is my code for a random case:
nfeats=60;ninds=1000;
trainRatio=0.8;valRatio=.1;testRatio=.1;
kmax=100; %for instance...
data=randi(100,nfeats,ninds);
class=randi(2,1,ninds);
[trainInd,valInd,testInd] = dividerand(1000,trainRatio,valRatio,testRatio);
train=data(:,trainInd);
test=data(:,testInd);
val=data(:,valInd);
train_class=class(:,trainInd);
test_class=class(:,testInd);
val_class=class(:,valInd);
precisionmax=0;
koptimal=0;
for know=1:kmax
    % is it the same thing to use knnclassify or fitcknn+predict??
    predicted_class = knnclassify(val', train', train_class', know);
    mdl = fitcknn(train', train_class', 'NumNeighbors', know);
    label = predict(mdl, val');
    consistency = sum(label == val_class')/length(val_class);
    if consistency > precisionmax
        precisionmax = consistency;
        koptimal = know;
    end
end
% retrain with the best k found on the validation set and evaluate once on the test set
mdl_final = fitcknn(train', train_class', 'NumNeighbors', koptimal);
label_final = predict(mdl_final, test');
consistency_final = sum(label_final == test_class')/length(test_class);
Thank you very much for all your help
For your 1st question, "what's the best ratio to divide the 3 subgroups?", there are only rules of thumb:
The amount of training data is most important: the more, the better. Thus, make it as big as possible and definitely bigger than the test or validation data.
Test and validation data have a similar function, so it is convenient to assign them the same amount of data. But it is important to have enough data to be able to recognize over-adaptation, so both should be picked from the dataset fully at random.
Consequently, a 50/25/25 or 60/20/20 partitioning is quite common. But if your total amount of data is small in relation to the total number of weights of your chosen topology (e.g. 10 weights in your net and only 200 cases in the data), then 70/15/15 or even 80/10/10 might be better choices.
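As a rough illustration of such a split (a Python/NumPy sketch with stand-in random data, roughly mirroring the dimensions mentioned in the question):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1500, 65))           # stand-in for 65 features of ~1500 subjects
y = rng.integers(0, 2, size=1500)    # stand-in labels (0 or 1)

idx = rng.permutation(len(X))        # fully random assignment, as recommended above
n_train, n_val = int(0.6 * len(X)), int(0.2 * len(X))
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]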
Concerning your 2nd question, "what's the point of having a validation set?":
Typically, you train the chosen model on your training data and then estimate the "success" by applying the trained model to unseen data - the validation set.
If you were to completely stop your efforts to improve accuracy at this point, you would indeed not need three partitions of your data. But typically you feel that you can improve the success of your model by e.g. changing the number of weights or hidden layers or ..., and now a big loop starts to run with many iterations:
1) change weights and topology, 2) train, 3) validate, not satisfied, goto 1)
The long-term effect of this loop is that you increasingly adapt your model to the validation data, so the results get better not because you intelligently improve your topology but because you unconsciously learn the properties of the validation set and how to cope with them.
Now, the final and only valid accuracy of your model is estimated on really unseen data: the test set. This is done only once and is also useful to reveal over-adaptation. You are not allowed to start a second, even bigger loop now, to prevent any adaptation to the test set!
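In code, that loop and the final test look roughly like this (a Python/scikit-learn sketch standing in for the MATLAB code above; it assumes the X_train/X_val/X_test arrays from the split sketch earlier in this answer):

from sklearn.neighbors import KNeighborsClassifier

# choose k on the validation set only
best_k, best_val_acc = 1, -1.0
for k in range(1, 101):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    val_acc = knn.score(X_val, y_val)
    if val_acc > best_val_acc:
        best_k, best_val_acc = k, val_acc

# the test set is touched exactly once, with the chosen k, and never used for tuning
final_model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print(final_model.score(X_test, y_test))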

Error Correcting Tournaments (ect) Multi Class Classification in Vowpal Wabbit

I tried to go through this paper, which describes the ECT algorithm, but could not make much out of it.
I know it is different from one-against-all (OAA) and even performs better than OAA. I wanted a simple explanation of how ECT works.
ECT and Filter trees are useful (only) if you have a very big number of output labels (classes), let's say N=1000. With OAA (one-against-all), it would mean to do N binary classification tasks for each example (during both training and testing). With ECT you can make the prediction much faster: log(N). You can imagine Filter trees (which are the basis of ECT) as a decision tree where in each node you ask whether the example belongs to one set of labels or another set of labels (using all the features, unlike original decision trees).
In general, ECT is worse (in terms of loss or accuracy) than OAA (but in some cases it may be almost as good as OAA). With N=10 labels, you should try OAA first. With N>1000, OAA is too slow (and even the accuracy is low), you should try ECT (or --log_multi or --csoaa_ldf in VW, if you can preselect a smaller number of labels which are relevant for each example).
See http://cilvr.cs.nyu.edu/diglib/lsml/logarithmic.pdf
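To make the log(N) idea concrete, here is a purely conceptual Python sketch (not VW's actual implementation): a balanced binary tree over the N labels, where each internal node holds a binary classifier that routes an example towards one half of the remaining labels, so a prediction costs about log2(N) binary decisions instead of N as in OAA.

import collections

N = 1000                                   # number of output labels

# Stand-in "trained" binary classifiers, one per tree node; here they all answer
# "left" just so the sketch runs, whereas in ECT/filter trees each one is learned.
node_classifiers = collections.defaultdict(lambda: (lambda x: True))

def predict(x):
    # walk a balanced binary tree over the label range [0, N): each node's binary
    # classifier says whether the label lies in the left or the right half
    lo, hi, node_id, steps = 0, N, 0, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if node_classifiers[node_id](x):   # left half: labels [lo, mid)
            hi, node_id = mid, 2 * node_id + 1
        else:                              # right half: labels [mid, hi)
            lo, node_id = mid, 2 * node_id + 2
        steps += 1
    return lo, steps

print(predict(x=None))   # roughly log2(1000) ~ 10 binary decisions instead of 1000 as in OAA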