I am trying to implement Fisher's linear discriminant function in MATLAB for K > 2 classes, but I am not sure about the algorithm for the K > 2 scenario. I know MATLAB has built-in functions, but I want to implement this without using them.
It would be great if someone could clarify the algorithm.
Here is some sample pseudocode:
N = number of cases
c = number of classes
Priors = matrix of prior probabilities per class for each case (c x N)
Targets = target labels for each case per class (c x N indicator matrix)
dimension of Data = Features x Cases
Get target labels for each data point:
T = Targets(:,Cases); % Target labels for each case
Calculate the mean vector per class and the common covariance matrix:
classifier.u = [mean(Data(:,T(1,:)==1),2), mean(Data(:,T(2,:)==1),2), ..., mean(Data(:,T(c,:)==1),2)]; % matrix of per-class mean vectors (one column per class)
classifier.invCV = inv(cov(Data')); % inverse of the common (pooled) covariance matrix, used as B1 below
Get discriminant values using the class mean vectors and the common covariance matrix:
A1=classifier.u;
B1=classifier.invCV;
D = A1'*B1*Data - 0.5*(A1'*B1.*A1')*ones(d,N) + log(Priors(:,Cases)); % d = number of features, N = number of cases
This produces c discriminant values per case; each case is then assigned to the class with the largest discriminant value. A runnable sketch follows.
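For concreteness, here is a minimal runnable sketch of the same classifier. It assumes Data is Features-by-Cases, labels is a 1-by-N vector with values in 1..c, and Priors is a c-by-1 vector of class priors (labels is a placeholder name, not a variable used above):
[d, N] = size(Data);
c = max(labels);
u = zeros(d, c);
for k = 1:c
    u(:,k) = mean(Data(:, labels == k), 2); % per-class mean vector
end
invCV = inv(cov(Data')); % pooled covariance, inverted (use pinv if ill-conditioned)
% One discriminant value per class (rows) and case (columns):
D = u'*invCV*Data - 0.5*diag(u'*invCV*u)*ones(1,N) + log(Priors)*ones(1,N);
[~, predicted] = max(D, [], 1); % assign each case to the class with the largest value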
I have 7 classes within my training examples (labeled 1-7). I'm running logistic regression and I want to create my ROC curve for each of my classes.
To train my model and make a prediction, I have the following code:
Theta = zeros(k, n+1); % initialize theta
[Theta, costs] = gradientDescent(Theta, @(t)(CostFunc(t, X, Y, lambda)),...
    @(t)(DerivOfCostFunc(t, X, Y, lambda)), alpha, iter_num);
%Make prediction with trained model
[scores,prediction] = predict(Theta, X_test); %X_test is the design matrix (ones on the first col)
Within the predict script, I have
scores = g(X*all_theta'); % g is the sigmoid function
[p_max, IndexOfMax]=max(scores, [], 2);
prediction = IndexOfMax;
Note that scores is an m by k matrix, where m is the number of training examples and k is the number of classes. prediction is an m by 1 vector with values from 1-7, the predicted class for each example.
To create the ROC curve, for class 3 for example,
classNum = 3;
for i = 1:size(scores,1)
    temp = scores(i,:);
    diffscore(i,:) = temp(classNum) - max([temp(1:classNum-1), temp(classNum+1:end)]);
end
I did this last part because I read that I had to treat class 3 as the positive class and all the others as negative (a vectorized equivalent is sketched below).
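For reference, the same one-vs-rest difference can be computed without the loop (a sketch using the same variable names):
otherCols = [1:classNum-1, classNum+1:size(scores,2)];
diffscore = scores(:,classNum) - max(scores(:,otherCols), [], 2);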
Finally, I made my curve with the following code:
[xROC,yROC,~,auc] = perfcurve(y_test,diffscore,classNum);
%y_test contains my true labels, m by 1 column vector
However, when running the ROC curve for each of my classes, I get the same plot for all, each with an AUC of 1. Based on some analysis, I know this is not correct, but I can't figure out in which part of the code I went wrong. Is there additional code I should add, or do I need to modify any of my existing code?
I need to find the cosine similarity between two frequency vectors in MATLAB.
Example vectors:
a = [2,3,4,4,6,1]
b = [1,3,2,4,6,3]
How do I measure the cosine similarity between these vectors in MATLAB?
Take a quick look at the mathematical definition of Cosine similarity.
From the definition, you just need the dot product of the vectors divided by the product of the Euclidean norms of those vectors.
% MATLAB 2018b
a = [2,3,4,4,6,1];
b = [1,3,2,4,6,3];
cosSim = sum(a.*b)/sqrt(sum(a.^2)*sum(b.^2)); % 0.9436
Alternatively, you could use
cosSim = (a(:).'*b(:))/sqrt(sum(a.^2)*sum(b.^2)); % 0.9436
which gives the same result.
After reading this correct answer, to avoid sending you to another castle I've added another approach using MATLAB's built-in linear algebra functions, dot() and norm().
cosSim = dot(a,b)/(norm(a)*norm(b)); % 0.9436
See also the tag-wiki for cosine-similarity.
Performance by Approach (summary of a benchmark plot):
sum(a.*b)/sqrt(sum(a.^2)*sum(b.^2))
(a(:).'*b(:))/sqrt(sum(a.^2)*sum(b.^2))
dot(a,b)/(norm(a)*norm(b))
In the plot, each point represents the geometric mean of the computation times for 10 randomly generated vectors.
If you have the Statistics toolbox, you can use the pdist2 function with the 'cosine' input flag, which gives 1 minus the cosine similarity:
a = [2,3,4,4,6,1];
b = [1,3,2,4,6,3];
result = 1-pdist2(a, b, 'cosine');
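Note that pdist2 treats each row as an observation, so a and b must be row vectors here; transpose column vectors before calling it.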
I am training a linear SVM classifier with the fitcsvm function in MATLAB:
cvFolds = crossvalind('Kfold', labels, nrFolds);
for i = 1:nrFolds % iterate through each fold
testIdx = (cvFolds == i); % indices of test instances
trainIdx = ~testIdx; % indices of training instances
cl = fitcsvm(features(trainIdx,:), labels(trainIdx), ...
    'KernelFunction',kernel, 'Standardize',true, ...
    'BoxConstraint',C, 'ClassNames',[0,1], 'Solver',solver);
[labelPred,scores] = predict(cl, features(testIdx,:));
eq = sum(labelPred==labels(testIdx));
accuracy(i) = eq/numel(labels(testIdx));
end
As this part of the code shows, the trained SVM model is stored in cl. Checking the model parameters in cl, I do not see which parameters correspond to the classifier weights, i.e. the parameters of a linear classifier that reflect the importance of each feature. Which parameter represents the classification weights? The MATLAB documentation says "The vector β contains the coefficients that define an orthogonal vector to the hyperplane"; does cl.Beta therefore represent the classification weights?
As you can see in this documentation, the equation of the hyperplane in fitcsvm is
f(x) = x'β + b = 0
And as you know, this is the familiar relationship
f(x) = w*x + b = 0 (or, equivalently, f(x) = x*w + b = 0)
So β is equal to w, the vector of weights.
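As a sanity check (a sketch, assuming a linear kernel as in the question), you can reconstruct the decision values from cl.Beta and cl.Bias and compare them with the scores returned by predict. Since 'Standardize' was true, the test data must first be standardized with cl.Mu and cl.Sigma:
Xtest = features(testIdx,:);
if ~isempty(cl.Mu) % Mu/Sigma are non-empty only when 'Standardize' was true
    Xtest = bsxfun(@rdivide, bsxfun(@minus, Xtest, cl.Mu), cl.Sigma);
end
fManual = Xtest*cl.Beta + cl.Bias; % should match scores(:,2), the positive-class score
[~, order] = sort(abs(cl.Beta), 'descend'); % features ranked by weight magnitude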
I'm very new to machine learning. I've read about MATLAB's Statistics Toolbox functions for hidden Markov models, and I want to classify a given sequence of signals using them. I have 3D coordinates in a matrix P, i.e. [501x3], and I want to train a model based on that. Every complete trajectory ends on a specific set of points, i.e. at (0,0,0), where it achieves its target.
What is the appropriate pseudocode/approach for my scenario?
My pseudocode:
the 501x3 matrix P is the emission matrix, where each coordinate is a state
a random NxN transition matrix (but I'm confused about this part)
generate a test sequence using the function hmmgenerate
train using hmmtrain(sequence, old_transition, old_emission)
give the final transition and emission matrices to hmmdecode with an unknown sequence to get its probability (also confusing)
EDIT 1:
In a nutshell, I want to classify 10 classes of trajectories, each [501x3], with an HMM. I want to sample 50 rows, i.e. [50x3], from each trajectory in order to build the model. I have murphyk's HMM toolbox for such random sequences.
Here is a general outline of the approach to classifying d-dimensional sequences using hidden Markov models:
1) Training:
For each class k:
prepare an HMM model. This includes initializing the following:
a transition matrix: Q-by-Q matrix, where Q is the number of states
a vector of prior probabilities: Q-by-1 vector
the emission model: in your case the observations are 3D points, so you could use a multivariate normal distribution (with specified mean vector and covariance matrix) or a Gaussian mixture model (a set of MVN distributions combined using mixture coefficients)
after properly initializing the above parameters, you train the HMM model, feeding it the set of sequences belonging to this class (EM algorithm).
2) Prediction
Next, to classify a new sequence X:
you compute the log-likelihood of the sequence under each model, log P(X|model_k)
then you pick the class that gave the highest likelihood. This is the class prediction.
As I mentioned in the comments, the Statistics Toolbox only implements HMMs with discrete observations, so you will have to find another library or implement the code yourself. Kevin Murphy's toolboxes (HMM toolbox, BNT, PMTK3) are popular choices in this domain.
Here are some answers I posted in the past using Kevin Murphy's toolboxes:
Issue in training hidden markov model and usage for classification
Simple example/use-case for a BNT gaussian_CPD
The above answers are somewhat different from what you are trying to do here, but they're a good place to start. A sketch of the per-class train/score loop is given below.
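For illustration, here is a sketch of that loop using Kevin Murphy's HMM toolbox; the same calls appear in the demo code below. It assumes seqs{k} holds the O-by-T-by-nex training sequences of class k, and numClasses, O, Q, M are placeholders:
for k = 1:numClasses
    prior0 = normalise(rand(Q,1));
    transmat0 = mk_stochastic(rand(Q,Q));
    [mu0, Sigma0] = mixgauss_init(Q*M, seqs{k}, 'full');
    mu0 = reshape(mu0, [O Q M]);
    Sigma0 = reshape(Sigma0, [O O Q M]);
    mixmat0 = mk_stochastic(rand(Q,M));
    [~, model{k}.prior, model{k}.transmat, model{k}.mu, ...
        model{k}.Sigma, model{k}.mixmat] = mhmm_em(seqs{k}, prior0, ...
        transmat0, mu0, Sigma0, mixmat0, 'max_iter', 10);
end
% Classify a new sequence X (O-by-T) by maximum log-likelihood:
for k = 1:numClasses
    ll(k) = mhmm_logprob(X, model{k}.prior, model{k}.transmat, ...
        model{k}.mu, model{k}.Sigma, model{k}.mixmat);
end
[~, predictedClass] = max(ll);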
The statement/case asks to build and train a hidden Markov model with the following components, specifically using murphyk's HMM toolbox:
O = size of the observation vector
Q = number of states
T = number of vectors in a sequence
nex = number of sequences
M = number of mixtures
Demo Code (from murphyk's toolbox):
O = 8; %Number of coefficients in a vector
T = 420; %Number of vectors in a sequence
nex = 1; %Number of sequences
M = 1; %Number of mixtures
Q = 6; %Number of states
data = randn(O,T,nex);
% initial guess of parameters
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
if 0 % flip to 1 for manual initialization; by default the else branch (mixgauss_init) runs
Sigma0 = repmat(eye(O), [1 1 Q M]);
% Initialize each mean to a random data point
indices = randperm(T*nex);
mu0 = reshape(data(:,indices(1:(Q*M))), [O Q M]);
mixmat0 = mk_stochastic(rand(Q,M));
else
[mu0, Sigma0] = mixgauss_init(Q*M, data, 'full');
mu0 = reshape(mu0, [O Q M]);
Sigma0 = reshape(Sigma0, [O O Q M]);
mixmat0 = mk_stochastic(rand(Q,M));
end
[LL, prior1, transmat1, mu1, Sigma1, mixmat1] = ...
mhmm_em(data, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 5);
loglik = mhmm_logprob(data, prior1, transmat1, mu1, Sigma1, mixmat1);
I have an assignment to implement MoG (mixture of Gaussians) with EM in MATLAB. The assignment:
My code atm:
clear
clc
load('data2')
%% INITIALIZE
K = 20;
pi = 0.01:((1-0.01)/K):1; % note: this shadows the built-in pi constant
for k = 1:K
    sigma{k} = eye(2);
    mu(k,:) = [rand(1), rand(1)];
end
%% Posterior over the latent variables
addition = 0;
for k = 1:K
    addition = addition + (pi(k)*mvnpdf(x, mu(k,:), sigma{k}));
end
for k = 1:K
    gamma{k} = (pi(k)*mvnpdf(x, mu(k,:), sigma{k})) ./ addition;
end
data has 1000 rows and 2 columns (so 1000 datapoints). My question now is: how do I calculate the responsibilities? When I try to calculate the covariance matrix, I get a 1x1000 matrix, while I believe the covariance matrix should be 2x2.
Unfortunately, I don't speak Matlab, so I can't really see where your code is incorrect, but I can answer generally (and maybe someone who knows Matlab can see if your code can be salvaged). Each datapoint has a gamma associated with it, which is the expectation of an indicator variable for each component in the mixture. Calculating them is pretty simple: for the i-th datapoint and the k-th component, gamma_ik is just the density of the k-th component at the i-th point, multiplied by the k-th mixture coefficient (the prior probability that the point came from the k-th component, which is pi in your assignment), normalised by this quantity computed over all k. Thus for each datapoint, you have a vector of responsibilities (of length k) with a sum of one.
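For completeness, here is a minimal MATLAB sketch of the computation described above, together with the standard EM M-step updates (assuming x is N-by-2, mu is K-by-2, sigma is a cell array of K 2-by-2 covariances, and pi is a length-K vector of mixing coefficients):
% E-step: responsibilities, one row per datapoint, one column per component
N = size(x,1);
gamma = zeros(N,K);
for k = 1:K
    gamma(:,k) = pi(k) * mvnpdf(x, mu(k,:), sigma{k}); % numerator for component k
end
gamma = bsxfun(@rdivide, gamma, sum(gamma,2)); % each row now sums to one
% M-step: standard updates; note each covariance comes out 2-by-2
for k = 1:K
    Nk = sum(gamma(:,k));
    mu(k,:) = gamma(:,k)' * x / Nk; % weighted mean
    d = bsxfun(@minus, x, mu(k,:));
    sigma{k} = (d' * bsxfun(@times, d, gamma(:,k))) / Nk; % weighted covariance
    pi(k) = Nk / N; % mixing coefficient
end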