I need to use KNN search to classify the testing data and find the classification rate.
Below is the MATLAB code, for example:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
load fisheriris
x = meas(:,3:4);               % x = all training data
y = [5 1.45; 6 2; 2.75 .75];   % y = 3 testing data points
[n,d] = knnsearch(x,y,'k',10); % find the 10 nearest neighbors of each of the three testing points
for b=1:3
tabulate(species(n(b,:)))
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The result was displayed in the Command Window:
tabulate(species(n(1,:)))
Value Count Percent
virginica 2 20.00%
versicolor 8 80.00%
tabulate(species(n(2,:)))
Value Count Percent
virginica 10 100.00%
tabulate(species(n(3,:)))
Value Count Percent
versicolor 7 70.00%
setosa 3 30.00%
If the testing points are all 'versicolor', the first and third testing points are classified correctly and the second one is not, so the classification rate is 2/3 × 100% = 66.7%.
Is there a way to modify the MATLAB code so that it computes the classification rate automatically and saves the result in the workspace?
In general you can find the number of correct predictions by using
sum(predicted_class == true_class) % For numerical data
sum(strcmp(predicted_class, true_class)) % For cellstrings
Or as a percentage
100 * sum(predicted_class == true_class) / length(predicted_class)
In the case of fisheriris the true class would be species. For your constructed data it would be
true_classes = {'versicolor'; 'versicolor'; 'versicolor'}
In the case of nearest neighbours, the true classes would be the class of the nearest neighbour(s). For a single neighbour:
predicted_class = species(n)
Where n is the index of the nearest neighbour as found by [n, d] = knnsearch(x, y).
sum(strcmp(predicted_class, true_class))
% result: 1
Which is indeed correct when you use only one neighbor.
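To compute the classification rate automatically for k > 1 neighbors, one option is a majority vote over the k neighbor labels, followed by a comparison against the true labels. The following is a minimal sketch built on the variables from the question (x, y, species); the majority vote via unique/accumarray is my own choice, not part of the original code:
% Majority-vote classification of the 3 test points with 10 neighbors
load fisheriris
x = meas(:,3:4);                               % training data
y = [5 1.45; 6 2; 2.75 .75];                   % 3 testing points
true_class = {'versicolor'; 'versicolor'; 'versicolor'};
n = knnsearch(x, y, 'k', 10);                  % indices of the 10 nearest neighbors
predicted_class = cell(size(y,1), 1);
for b = 1:size(y,1)
    neighbor_labels = species(n(b,:));         % labels of the 10 neighbors
    [labels, ~, idx] = unique(neighbor_labels);
    counts = accumarray(idx(:), 1);            % votes per distinct label
    [~, winner] = max(counts);
    predicted_class{b} = labels{winner};       % majority-vote label
end
classification_rate = 100 * sum(strcmp(predicted_class, true_class)) / numel(true_class)
% classification_rate stays in the workspace; 66.667 for this example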
Related
I have an issue with my model accuracy calculation. I used the code below:
y_train = [ 1 1 1 4 4 3 3 5 5 5 ]; % true labels for x_train
%x_test : has no true labels.
predictedLabel=[ 1 2 3 4 5 ]; % predicted labels for x_test
group = y_train;            % 10 training labels
grouphat = predictedLabel;  % 5 predicted labels for the test data
C = confusionmat(group, grouphat);
Accuracy = sum(diag(C)) / sum(C(:)) * 100;
but I get the error:
Error using confusionmat (line 75)
G and GHAT need to have same number of rows
Do I get this error because the test set has a different number of samples than the training set? There are no true labels for the test data (semi-supervised learning).
Your training labels and predicted labels are based on different inputs, so it doesn't make sense to compare them in a confusion matrix. From the confusionmat docs:
returns the confusion matrix C determined by the known and predicted groups
i.e. the known and predicted results for the same data.
Take this partly pseudo-code example; see the comments for details:
% split your input data
trainData = data(1:100, :); % Training data
testData = data(101:120, :); % Testing data (mutually exclusive from training)
% Do some training (pseudo-code, not valid MATLAB)
% ** Let's assume that the labels are in column 1 **
model = train( trainData(:,1), trainData(:,2:end) );
% Test your model on the input data, excluding the actual labels in column 1
predictedLabels = model( testData(:,2:end) );
% Get the actual labels from column 1
actualLabels = testData(:,1);
% Note that size(predictedLabels) == size(actualLabels)
% Now we can do a confusion matrix
C = confusionmat( actualLabels, predictedLabels )
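For a fully runnable illustration of the same idea, here is a small sketch using fisheriris and fitcknn (it assumes the Statistics and Machine Learning Toolbox; the KNN classifier and the 120/30 split are arbitrary choices for the example):
load fisheriris
rng(1);                                   % reproducible split
idx = randperm(size(meas,1));
trainIdx = idx(1:120);                    % 120 training samples
testIdx  = idx(121:end);                  % 30 test samples, disjoint from training
model = fitcknn(meas(trainIdx,:), species(trainIdx), 'NumNeighbors', 5);
predictedLabels = predict(model, meas(testIdx,:));
actualLabels    = species(testIdx);
% known and predicted labels now refer to the SAME samples, so confusionmat works
C = confusionmat(actualLabels, predictedLabels);
Accuracy = sum(diag(C)) / sum(C(:)) * 100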
Assume that I have a dataset with the following sizes:
train = 500,000 x 960     % number of training samples (vectors), each of length 960
B_base = 1,000,000 x 960  % number of base samples (vectors), each of length 960
Query = 1,000 x 960       % number of query samples (vectors), each of length 960
truth_nn = 1,000 x 100
truth_nn contains the ground truth neighbors in the form of the
pre-computed k nearest neighbors and their squared Euclidean distances, so the columns of truth_nn represent the k = 100 nearest neighbors. I am finding it difficult to apply nearest neighbor search in the code snippet. Can somebody please show how to apply the ground truth neighbors truth_nn in finding the mean average precision-recall?
It would be of immense help if somebody could show this with a small example, by creating a data matrix, a query matrix, and the ground truth neighbors in the form of the pre-computed k nearest neighbors and their squared Euclidean distances. I tried creating a sample database.
Assume, the base data is
B_base = [1 1; 2 2; 3 2; 4 4; 5 6];
Query data is
Query = [1 1; 2 1; 6 2];
[neighbors, distances] = knnsearch(B_base, Query, 'k', 2);
would find 2 nearest neighbors.
Question 1: how do I create the truth data containing the ground truth neighbors and pre-computed k nearest neighbor distances?
This is called the mean average precision-recall. I tried implementing the k-nearest-neighbor search and the average precision-recall as follows, but I cannot work out how to apply the ground truth table.
Question 2:
I am trying to apply k-nearest-neighbor search by first converting the real-valued features into binary.
I am unable to apply the k-nearest-neighbor search for different values of k = 10, 20, 50 and to check how much data has been correctly recalled using the GIST database. In the GIST truth_nn() file, when I specify truth_nn(i,1:k) for a query vector i, the AveragePrecision function throws an error. If somebody can show, using any sample ground truth with a structure similar to the one in GIST, how to properly specify k and calculate the average precision-recall, I will be able to apply the solution to the GIST database. This is my approach so far, and it would be of immense help if the correct way were shown with an example that I can relate to the GIST database. The problem is how to find neighbors from the ground truth and compare them with the neighbors obtained after sorting the distances.
I am also interested in how I can use pdist2() instead of the present distance calculation, as it takes a long time.
numQueryVectors = size(Query,1);
numDataVectors  = size(B_base,1);
k = 50;                                    % number of nearest neighbors to keep
% Calculate squared Euclidean distances from each query vector to all base vectors
for i = 1:numQueryVectors
    dist = sum((repmat(Query(i,:), numDataVectors, 1) - B_base).^2, 2);
    [sortval, sortpos] = sort(dist, 'ascend');
    neighborIds(i,:) = sortpos(1:k);
    neighborDistances(i,:) = sqrt(sortval(1:k));
end
% Compare the sorted nearest neighbors with the ground truth for k = 50
% HOW DO I SPECIFY k = 50 in the ground truth, truth_nn?
for i = 1:numQueryVectors
    AP(i) = AveragePrecision(neighborIds(i,:), truth_nn(i,:));
end
mAP = mean(AP);
function ap = AveragePrecision(rank_id, truth_id)
truth_num = length(truth_id);              % number of ground-truth neighbors (k)
truth_pos = zeros(truth_num,1);
for j = 1:truth_num                        % position of each true neighbor in the ranked list
    truth_pos(j) = find(rank_id == truth_id(j));
end
truth_pos = sort(truth_pos, 'ascend');
% compute average precision as the area below the recall-precision curve
ap = 0;
delta_recall = 1/truth_num;
for j = 1:truth_num
    p = j/truth_pos(j);
    ap = ap + p*delta_recall;
end
end
UPDATE: Based on the solution, I tried to calculate the mean average precision using the formula given here and a reference code. But I am not sure whether my approach is correct, because the theory says I need to rank the returned queries based on the indices, and I do not fully understand this. Mean average precision is required to judge the quality of the retrieval algorithm.
precision = positives/total_data;
recall = positives /(positives+negatives);
truth_pos = sort(positives, 'ascend');
truth_num = length(truth_pos);
ap = 0;
delta_recall = 1/truth_num;
for j=1:truth_num
p = j/truth_pos(j);
ap = ap + p*delta_recall;
end
ap
The value of ap is infinity, positives = 0, and negatives = 150. This means that knnsearch() is not working at all.
I think you are doing extra work. This process is very simple in MATLAB, and you can operate on entire arrays, which should be faster than for loops and is a bit easier to read.
Your truth_nn and neighbors should contain the same data if there are no errors, with one entry per row. MATLAB already returns the knnsearch result sorted in ascending order of distance, so column 1 is the closest neighbor, column 2 the second closest, column 3 the third, and so on. There is no need to sort the data again.
Just compare truth_nn to neighbors to get your statistics. This is a simple example to show how the program should go; it will not work on your data without some modification.
%in your example this is provided, I created my own
truth_nn = [1,2;
1,3;
4,3];
B_base = [1 1; 2 2; 3 2; 4 4; 5 6];
Query = [1 1; 2 1; 6 2];
% perform the k-nearest-neighbor search (not k-means)
num_neighbors = 2;
[neighbors, distances] = knnsearch(B_base, Query, 'k', num_neighbors);
%--- output---
% neighbors = [1,2;
%              1,2;    notice this doesn't match truth_nn 1,3
%              4,3]
% distances = [ 0      1.4142;
%               1.0000 1.0000;
%               2.8284 3.0000];
% compute statistics; nnz counts the number of nonzero elements, in the first
% case every piece of data that matches
% NOTE 1: the indexing on truth_nn(:,1:num_neighbors) says use all rows
%         but only the first num_neighbors columns. This should
%         prevent the dimension mismatch error you were getting
positives = nnz(neighbors == truth_nn(:,1:num_neighbors));  % result = 5
negatives = nnz(neighbors ~= truth_nn(:,1:num_neighbors));  % result = 1
% NOTE 2: I've switched this from truth_nn to neighbors, which helps
%         when you change num_neighbors
total_data = numel(neighbors);                               % result = 6
percent_incorrect = 100*(negatives / total_data);            % 16.6667
percent_correct   = 100*(positives / total_data);            % 83.3333
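To go from these counts to mean average precision, one option is to score each query's ranked list against its row of truth_nn. The following is a hedged sketch of average precision at k, under the assumption that the first num_neighbors columns of truth_nn are the relevant set for each query; it is not part of the original answer:
k = num_neighbors;
numQueries = size(Query, 1);
AP = zeros(numQueries, 1);
for q = 1:numQueries
    relevant  = truth_nn(q, 1:k);                % ground-truth neighbor ids for query q
    retrieved = neighbors(q, :);                 % ranked ids returned by knnsearch
    hits = ismember(retrieved, relevant);        % 1 where a retrieved id is relevant
    precisionAtRank = cumsum(hits) ./ (1:k);     % precision after each rank
    AP(q) = sum(precisionAtRank .* hits) / k;    % average precision for this query
end
mAP = mean(AP)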
I am training a one-vs-all SVM classifier. I used a 200-by-459 matrix to train the classifier using the VLFeat SVM classifier (http://www.vlfeat.org/matlab/vl_svmtrain.html).
[W B] = vl_svmtrain(train_image_feats', tmp', .00001);
where train_image_feats' is a 200-by-459 matrix and tmp' is a 1-by-459 label vector.
The above command trains the SVM with no problem, but I then get an error when computing the scores on the test matrix, which is obviously not the same size as the training matrix.
scores(i, :) = W'*test_image_feats' + B;
where test_image_feats' is a 200-by-90 matrix and scores is a 9-by-459 matrix: 9 because there are 9 categories (labels) to classify, and 459 because that is the number of training images.
The above command gives the error:
Subscripted assignment dimension mismatch.
Error in svm_classify (line 56) scores(i, :) = W'*test_image_feats'
+ B;
Edit: Full code added..
categories = unique(train_labels);
num_categories = length(categories);
scores = zeros([num_categories size(train_labels, 1)]); %train_labels is 459 by 1 size
for i=1:num_categories %there are 9 categories
tmp = strcmp(train_labels, categories{i});
tmp = tmp - (1-tmp);
[W B] = vl_svmtrain(train_image_feats', tmp', .00001);
scores(i, :) = W'*test_image_feats' + B;
end
predicted_categories = cell(size(train_labels));
parfor i=1:size(test_image_feats,1)
image_scores = scores(:, i);
label_index = find(image_scores==max(image_scores));
predicted_categories{i}=categories(label_index);
end
Conceptually you are training a model with 459 training samples to predict the scores of 90 test samples.
scores = zeros([num_categories size(train_labels, 1)]);
isn't right, as it allocates scores with the size of the training set. In fact, you don't need to care about the size of the training set at all: you could train the model with 20 or 20,000 images and the prediction step wouldn't change.
scores has to be defined with the test set in mind:
scores = zeros([num_categories size(test_labels, 1)]);
Using 459 for both only works when the set you are scoring happens to have the same number of samples as the training set.
The problem is not with the right-hand side of the assignment, but with scores(i,:): you are trying to assign a 1-by-90 row vector (one score per test image) into a row of scores that has 459 columns - this simply won't fit.
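As a concrete fix, the loop from the question only needs the preallocation changed so that each row of scores holds one score per test image. A sketch under the assumption, taken from the question, that test_image_feats is 90-by-200 and there are 9 categories:
num_test = size(test_image_feats, 1);          % 90 test images
scores = zeros(num_categories, num_test);      % 9-by-90, sized on the test set
for i = 1:num_categories
    tmp = strcmp(train_labels, categories{i});
    tmp = tmp - (1 - tmp);                     % +1 / -1 labels for one-vs-all
    [W, B] = vl_svmtrain(train_image_feats', tmp', .00001);
    scores(i, :) = W' * test_image_feats' + B; % 1-by-90 row now fits
end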
I am working with hydrological time series data and I am attempting to construct bootstrap artificial neural network models. To provide an uncertainty assessment using confidence intervals, one must make sure, when resampling/bootstrapping the original time series, that every value of the original series is held back (left out) at least twice across all bootstrap samples, so that the variance and confidence interval at that point in time can be calculated.
To give some background:
I am using a hydrological time series that contains Standard Precipitation Index values at monthly time steps, this time series spans 429 (rows) x 1 (column), let's call this time series vector X. All elements/values of X are normalized and standardized between 0 and 1.
Time series X is then trained against some Target values (same length and conditions as X) in a Neural Network to produce new estimates of the Target values, we'll call this output vector, O (same length and conditions as X).
I am now to take X and resample it ii =1:1:200 times (i.e. Bootstrap size = 200) for length(429) with replacement. Let's call the matrix where all the bootstrap samples are placed, M. I use B = randsample(X, length(X), true) and fill M using a for loop such that M(:,ii) = B. Note: I also make sure to include rng('shuffle') after my randsample statement to keep the RNG moving to new states in hopes that it will provide more random results.
Now I am to test how "well" my data was resampled for use in creating confidence intervals.
My procedure is as follows:
1. Generate a for loop to create M using the above procedure.
2. Create a new variable Xc; this will hold all values of X that were not resampled in bootstrap sample ii, for ii = 1:1:200.
3. For j = 1:1:length(X), fill Xc using Xc(j,ii) = setdiff(X, M(:,ii)); if element j exists in M(:,ii), fill Xc(j,ii) with NaN.
4. Xc is now a matrix of the same size and dimensions as M. Count the number of NaN values in each row of Xc and place the counts in vector CI.
5. If any row of CI is greater than [bootstrap sample size, in this case 200, minus 1], then no confidence interval can be created at that point in time.
When I run this I find that the values chosen from my set X are almost always repeated, i.e. the same values of X are used to generate all the samples in M. It's roughly the same ~200 data points in my original time series that are always chosen to create the new bootstrap samples.
How can I effectively alter my program, or use specific functions, to avoid the negative outcome in step (5)?
Here is an example of my code, but please keep in mind the variables used in the script may differ from my text in here.
Thank you for the help and please see the code below.
for ii = 1:1:Blen % for loop to create 'how many bootstraps we desire'
B = randsample(Xtrain, wtrain, true); % bootstrap resamples of data series 'X' for 'how many elements' with replacement
rng('shuffle');
M(:,ii) = B; % creates a matrix of all bootstrap resamples with respect to the amount created by the for loop
[C,IA] = setdiff(Xtrain,B); % creates a vector containing all elements of 'Xtrain' that were not included in bootstrap sample 'ii' and the location of each element
[IAc] = setdiff(k,IA); % creates a vector containing locations of elements of 'Xtrain' used in bootstrap sample 'ii' --> ***IA + IAc = wtrain***
for j = 1:1:wtrain % for loop that counts each row of vector
if ismember(j,IA)== 1 % if the count variable is equal to a value of 'IA'
XC(j,ii) = Xtrain(j,1); % place variable in matrix for sample 'ii' in position 'j' if statement above is true
else
XC(j,ii) = NaN; % hold position with a NaN value to state that this value has been used in bootstrap sample 'ii'
end
dum1(:,ii) = wtrain - sum(isnan(XC(:,ii))); % dummy variable to permit transposing of 'IAs' limited by 'isnan' --> used to calculate amt of elements in IA
dum2(:,ii) = sum(isnan(XC(:,ii))); % dummy variable to permit transposing of 'IAsc' limited by 'isnan'
IAs = transpose(dum1) ; % variable counting amount of elements not resampled in 'M' at set 'i', ***i.e. counts 'IA' for each resample set 'i'
IAsc = transpose(dum2) ; % variable counting amount of elements resampled in 'M' at set 'i', ***i.e. counts 'IAc' for each resample set 'i'
chk = isnan(XC); % returns 1 in position of NaN and 0 in position of actual value
chks = sum(chk,2); % counts how many NaNs are in each row for length of time training set
chks_cnt = sum(chks(:)<(Blen-1)); % counts how many values of the original time series that can be provided a confidence interval, should = wtrain to provide complete CIs
end
end
This doesn't appear to be a problem with randsample, but rather a problem somewhere else in your code. randsample does the right thing. For example:
x = (1:10)';
nSamples = 10;
nIter = 100;
data = zeros(nSamples, nIter);       % preallocate
for iter = 1:nIter
    data(:,iter) = randsample(x, nSamples, true);
end
hist(data(:))                        % this is approximately uniform
randsample samples quite randomly...
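If the underlying concern is how often each original value is left out across bootstrap samples (step 4 of the procedure in the question), a quick check on indices rather than values avoids the setdiff bookkeeping. This is a hedged sketch of my own, not the original code, and the variable names are illustrative:
n = 429;                                  % length of the original series
nBoot = 200;                              % number of bootstrap samples
idx = randi(n, n, nBoot);                 % resampled indices, with replacement
leftOutCount = zeros(n, 1);
for ii = 1:nBoot
    leftOutCount = leftOutCount + (~ismember((1:n)', idx(:, ii)));
end
% each element is left out of a given bootstrap sample with probability
% (1 - 1/n)^n, roughly 0.37, so leftOutCount should be around 0.37*nBoot
min(leftOutCount)                         % should comfortably exceed 2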
When doing:
load training.mat
training = G
load testing.mat
test = G
and then:
>> knnclassify(test.Inp, training.Inp, training.Ltr)
??? Error using ==> knnclassify at 91
The length of GROUP must equal the number of rows in TRAINING.
Since:
>> size(training.Inp)
ans =
40 40 2016
And:
>> length(training.Ltr)
ans =
2016
How can I pass the training.Inp 3-D matrix as the second parameter of knnclassify (TRAINING) so that the number of rows is 2016 (the third dimension)?
Assuming that your 3D data is interpreted as 40-by-40 matrix of features for each of the 2016 instances (third dimension), we will have to re-arrange it as a matrix of size 2016-by-1600 (rows are samples, columns are dimensions):
%# random data instead of the `load data.mat`
testing = rand(40,40,200);
training = rand(40,40,2016);
labels = randi(3, [2016 1]); %# a class label for each training instance
%# (out of 3 possible classes)
%# arrange data as a matrix whose rows are the instances,
%# and columns are the features
training = reshape(training, [40*40 2016])';
testing = reshape(testing, [40*40 200])';
%# k-nearest neighbor classification
prediction = knnclassify(testing, training, labels);
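Note that knnclassify ships with the Bioinformatics Toolbox and has been deprecated in favor of fitcknn; if it is unavailable in your release, the same reshaped matrices can be fed to fitcknn/predict from the Statistics and Machine Learning Toolbox. A minimal sketch, assuming that toolbox is installed:
model = fitcknn(training, labels, 'NumNeighbors', 1);  % 1-NN, like knnclassify's default
prediction = predict(model, testing);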