I have extracted features from a biometric image and then applied histogram remapping to the extracted features. I found that this step increases the recognition accuracy: it reduces the distances between samples from the same person and increases the distances between samples from different persons. However, when I used the MATLAB histogram function to plot the distribution of the features after mapping, the histograms were identical for all images and all persons. Is there a MATLAB plot function I can use to show the small differences between features from the same person, and the large differences between features from different persons, after the mapping step, compared to the differences between features before mapping?
The attached file presents examples. In this file, please note the following:
Images 21 and 22 are from the same person, while image 63 is from a different person.
The knn distance between features after mapping is 394.3704 for images 21 & 22,
compared to 992.2379 between 21 & 63, and 993.2462 between 22 & 63. Despite these differences in distance, the three histograms are identical.
MATLAB code:
% to draw the histogram of the filtered image
histogram(filtered_image22)
% to measure the knn distance
[a, b] = size(filtered_image21);
filtered_image21_vector = reshape(filtered_image21, [1 a*b]);
[x, z] = size(filtered_image63);
filtered_image63_vector = reshape(filtered_image63, [1 x*z]);
[idx, D] = knnsearch(filtered_image21_vector, filtered_image63_vector)
% knnsearch(X,Y) searches for the nearest neighbor (i.e., the closest
% point, row, or observation) in X to each point (i.e., row or observation)
% in the query data Y, using an exhaustive search or a Kd-tree.
% knnsearch returns idx, a column vector of the indices of the nearest neighbors,
% and D, which contains the distance from each observation in Y to its
% closest observation in X.
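For illustration, the kind of comparison I would like to visualise is something like the sketch below (filtered_image22_vector is assumed to be built with reshape in the same way as the other two vectors above):
% Rough sketch only: visualise the distances and the element-wise differences
dists = [394.3704 992.2379 993.2462];    % 21 vs 22, 21 vs 63, 22 vs 63 (values quoted above)
figure;
subplot(1,2,1);
bar(dists);
set(gca, 'XTickLabel', {'21 vs 22', '21 vs 63', '22 vs 63'});
title('Pairwise distances after mapping');
subplot(1,2,2);
histogram(filtered_image21_vector - filtered_image22_vector);   % small differences (same person)
hold on;
histogram(filtered_image21_vector - filtered_image63_vector);   % large differences (different persons)
legend('21 - 22 (same person)', '21 - 63 (different persons)');
title('Element-wise differences of the mapped features');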
Greetings,
How can I calculate how many distance calculations would need to be performed to classify the IRIS dataset using a Nearest Mean Classifier?
I know that the IRIS dataset has 4 features and that every record is classified with one of 3 different labels.
According to some textbooks, the calculation can be carried out as follows:
However, I am lost with the different notations and what this equation means. For example, what is s^2 in the equation?
The notation is standard in most machine learning textbooks. s in this case is the sample standard deviation for the training set. It is quite common to assume that each class has the same standard deviation, which is why every class is assigned the same value.
However, you shouldn't pay too much attention to that. The most important point is the case where the priors are equal. This is a fair assumption, meaning that you expect each class to be roughly equally represented in your dataset. Under that assumption, the classifier simply boils down to finding the smallest distance from a sample x to each of the classes, each represented by its mean vector.
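For reference, the discriminant your textbook is most likely referring to (I'm guessing at the exact form here) is something like

    g_i(x) = -||x - m_i||^2 / (2*s^2) + ln P(w_i)

where m_i is the mean vector of class i and P(w_i) is its prior. With equal priors and a shared s, the ln P(w_i) term and the 1/(2*s^2) factor are identical across classes, so maximizing g_i(x) is the same as minimizing the distance ||x - m_i||.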
How you'd compute this is quite simple. In your training set, you have a set of training examples, each belonging to a particular class. For the iris dataset, you have three classes. You find the mean feature vector for each class and store them as m1, m2 and m3 respectively. Afterwards, to classify a new feature vector, simply find the distance from this vector to each of the mean vectors. Whichever mean gives the smallest distance determines the class you'd assign.
Since you chose MATLAB as the language, allow me to demonstrate with the actual iris dataset.
load fisheriris; % Load iris dataset
[~,~,id] = unique(species); % Assign a numeric class ID (1, 2 or 3) to each example
means = zeros(3, 4); % Store the mean vectors for each class
for i = 1 : 3 % Find the mean vectors per class
    means(i,:) = mean(meas(id == i, :), 1); % Mean feature vector for class i
end
x = meas(10, :); % Choose a row from the dataset (here the 10th)
% Determine which mean vector has the smallest distance and thus the assigned class
[~,c] = min(sum(bsxfun(@minus, x, means).^2, 2));
The code is fairly straightforward. Load in the dataset, and since the labels are in a cell array, it's handy to create a new set of labels enumerated as 1, 2 and 3 so that it's easy to isolate the training examples per class and compute their mean vectors. That's what's happening in the for loop. Once that's done, I choose a data point from the training set (the 10th row) and compute the distance from this point to each of the mean vectors. We choose the class that gives us the smallest distance.
If you wanted to do this for the entire dataset, you can, but that requires permuting the dimensions:
data = permute(meas, [1 3 2]);      % examples along dim 1, features along dim 3
means_p = permute(means, [3 1 2]);  % classes along dim 2, features along dim 3
P = sum(bsxfun(@minus, data, means_p).^2, 3); % squared distance from each example to each mean
[~,c] = min(P, [], 2);              % class with the smallest distance per example
data and means_p are the features and mean vectors rearranged into 3D arrays with singleton dimensions so that they expand against each other. The third line computes the distances in a vectorized way, producing a 2D matrix in which row i holds the squared distance from training example i to each of the mean vectors. We finally find the class with the smallest distance for each example.
To get a sense of the accuracy, we can simply compute the fraction of the total number of times we classified correctly:
>> sum(c == id) / numel(id)
ans =
0.9267
With this simple nearest mean classifier, we get an accuracy of 92.67%... not bad, but you can do better. Finally, to answer your question, you would need K * d distance calculations, with K being the number of examples and d being the number of classes; for the iris dataset that is 150 * 3 = 450 distance calculations. You can clearly see that this is required by examining the logic and the code above.
I am trying to use PCA to visualize my implementation of k-means algorithm. I am following the tutorial on Principal Component Coefficients, Scores, and Variances in this link.
I am using the following command: [coeff,score,~]=pca(X'); where X is my data.
My data is a 30 by 455 matrix, that is, 30 features with 455 samples. I have successfully used the score output to create a 2D plot for visualization purposes. Now I wish to project the 30-dimensional cluster centers onto that plane. I have tried coeff*centers(:,1), but I am not sure whether this is the correct usage.
How do I project a new 30-dimensional point onto the 2D plane of the first vs the second PCA component?
I assume that by centers(:, 1) you denote a new observation. To express this observation in the principal components you should write
[coeff, score, ~, ~, ~, mu] = pca(X'); % also return the estimated mean "mu"
tmp = centers(:, 1) - mu'; % remove the mean, since pca() by default centers the data
new_score = coeff' * tmp;  % the new observation expressed in the principal components
Note that you have to subtract the mean since pca() by default centers the data. Also, note the transpose ' on coeff. In fact it should be inv(coeff), but since coeff is an orthogonal matrix we can use transpose instead.
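To place the point on your existing 2D plot, take the first two entries of new_score and overlay them on the score scatter, for example (a sketch using the variables above):

scatter(score(:,1), score(:,2));   % the samples in the first two principal components
hold on;
plot(new_score(1), new_score(2), 'rx', 'MarkerSize', 12, 'LineWidth', 2); % the projected center
hold off;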
I want to carry out hierarchical clustering in MATLAB and plot the clusters on a scatter plot. I have used the evalclusters function to first investigate what a 'good' number of clusters would be, using different criterion values, e.g. Silhouette and CalinskiHarabasz. Here is the code I used for the evaluation (x is my data with 200 observations and 10 variables):
E = evalclusters(x,'linkage','CalinskiHarabasz','KList',[1:10])
% store the optimal number of clusters
optk = E.OptimalK;
% save the outputs to a structure
clust_struc(1).Optimalk = optk;
clust_struc(1).method = {'CalinskiHarabasz'};
I then used code similar to what I have found online:
gscatter(x(:,1),x(:,2),E.OptimalY,'rbgckmr','xod*s.p')
% OptimalY is a 200-element vector containing the cluster numbers
and this is what I get:
My question may be silly, but I don't understand why I am only using the first two columns of data to produce the scatter plot. I realise that the clusters themselves are being incorporated through the use of OptimalY, but should I not be using all of the data in x?
Each row in x is an observation with properties in size(x,2) dimensions. All of these dimensions are used when clustering the rows of x.
However, when plotting the clusters, we cannot show more than 2-3 dimensions, so we try to represent each element by its key properties. I'm not sure that x(:,1), x(:,2) are the best choice, but you do have to pick 2 columns for a 2-D plot.
Usually you would have some properties of interest that you want to plot. Have a look at the example in the MATLAB doc: the fisheriris data has 4 different variables - the length and width measurements from the sepals and petals of three species of iris flowers. It is up to you to decide which ones you want to plot against each other (in the example they chose Petal Length and Petal Width).
Here is a comparison between taking the petal measurements and the sepal measurements as the axes for plotting the grouping:
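Something along these lines produces that kind of comparison (a minimal sketch, assuming a hierarchical grouping of the fisheriris data via clusterdata; the original figures may have been generated differently):

load fisheriris;                                            % meas: 150 x 4 (sepal L/W, petal L/W)
grp = clusterdata(meas, 'Linkage', 'ward', 'Maxclust', 3);  % hierarchical clustering into 3 groups
figure;
subplot(1,2,1);
gscatter(meas(:,3), meas(:,4), grp);                        % petal length vs petal width
xlabel('Petal length'); ylabel('Petal width'); title('Petal measurements');
subplot(1,2,2);
gscatter(meas(:,1), meas(:,2), grp);                        % sepal length vs sepal width
xlabel('Sepal length'); ylabel('Sepal width'); title('Sepal measurements');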
A proof of concept prototype I have to do for my final year project is to implement K-Means Clustering on a big data set and display the results on a graph. I only know object-oriented languages like Java and C# and decided to give MATLAB a try. I notice that with a functional language the approach to solving problems is very different, so I would like some insight on a few things if possible.
Suppose I have the following data set:
raw_data
400.39 513.29 499.99 466.62 396.67
234.78 231.92 215.82 203.93 290.43
15.07 14.08 12.27 13.21 13.15
334.02 328.79 272.2 306.99 347.79
49.88 52.2 66.35 47.69 47.86
732.88 744.62 687.53 699.63 694.98
And I picked rows 2 and 4 to be the 2 centroids:
centroids
234.78 231.92 215.82 203.93 290.43 % Centroid 1
334.02 328.79 272.2 306.99 347.79 % Centroid 2
I now want to compute the Euclidean distance of each point to each centroid, then assign each point to its closest centroid and display this on a graph. Let's say I want to show the two clusters as blue and green. How can I do this in MATLAB? If this were Java, I would initialise each row as an object and add it to separate ArrayLists (representing the clusters).
If rows 1, 2 and 3 all belong to the first centroid / cluster, and rows 4, 5 and 6 belong to the second centroid / cluster, how can I classify these so as to display them as blue or green points on a graph? I am new to MATLAB and really curious about this. Thanks for any help.
(To begin with, MATLAB has a flexible distance-measuring function, pdist2, and also a kmeans implementation, but I'm assuming that you want to build your code from scratch.)
In MATLAB, you try to implement everything as matrix algebra, without loops over individual elements.
In your case, if R is the raw_data matrix and C is the centroids matrix, you can shift the dimension that represents the centroid number to the 3rd place with permC = permute(C, [3 2 1]). The bsxfun function then lets you subtract permC from R while expanding R's third dimension as necessary: D = bsxfun(@minus, R, permC). An element-wise square followed by summation across columns, SqD = sum(D.^2, 2), gives you the squared distance of each observation from each centroid. Performing all these operations within a single statement and shifting the third (centroid) dimension back to the 2nd place looks like this:
SqD = permute(sum(bsxfun(@minus, R, permute(C, [3 2 1])).^2, 2), [1 3 2])
Picking the centroid of minimal distance is now straightforward: [minDist,minCentroid]=min(SqD,[],2)
If this looks complex, I recommend inspecting the product of each sub-step and reading the help of each command.
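Putting it together with your data, including a blue/green plot (a sketch only; I am arbitrarily plotting the first two columns of the raw data as the axes):

R = [400.39 513.29 499.99 466.62 396.67
     234.78 231.92 215.82 203.93 290.43
      15.07  14.08  12.27  13.21  13.15
     334.02 328.79 272.20 306.99 347.79
      49.88  52.20  66.35  47.69  47.86
     732.88 744.62 687.53 699.63 694.98];
C = R([2 4], :);                                    % rows 2 and 4 as the centroids
SqD = permute(sum(bsxfun(@minus, R, permute(C, [3 2 1])).^2, 2), [1 3 2]);
[~, minCentroid] = min(SqD, [], 2);                 % index of the closest centroid for each row
% Plot the first two columns: blue for cluster 1, green for cluster 2
figure; hold on;
plot(R(minCentroid == 1, 1), R(minCentroid == 1, 2), 'bo');
plot(R(minCentroid == 2, 1), R(minCentroid == 2, 2), 'go');
plot(C(:,1), C(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2);  % the centroids themselves
legend('cluster 1', 'cluster 2', 'centroids');
hold off;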
I have a population matrix of 5 images with 49 extracted salience features.
I want to calculate, in MATLAB, the cosine similarity between a test image with the same 49 extracted features and the images in the population.
1) Transform each image of size M rows x N columns into a vector of M*N elements. Keep one image in a vector u and the other image in a vector v.
2) Evaluate: cosTheta = dot(u,v)/(norm(u)*norm(v)); [as far as I know there is no single built-in MATLAB function that does that]
Usually people evaluate similarities among images using their projections onto the eigenfaces, so before doing that, people usually compute the eigenfaces.
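A minimal sketch of those two steps (img1 and img2 are placeholders for your own images or 49-element feature vectors):

% Sketch: cosine similarity between two images / feature vectors
u = reshape(img1, 1, []);   % flatten the first image into a row vector
v = reshape(img2, 1, []);   % flatten the second image into a row vector
cosTheta = dot(u, v) / (norm(u) * norm(v));   % cosine similarity, in [-1, 1]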
You could use MATLAB's built-in function to get the cosine distance:
pdist([u;v],'cosine')
which returns the "One minus the cosine of the included angle between points". You could then subtract the answer from one to get the 'cosine of the included angle' (similarity), like this:
1 - pdist([u;v],'cosine')
Source: Pairwise distance between pairs of objects.