K-means and SVM on multivariate data - MATLAB

I generate multivariate data with mvnrnd. I would like to use K-means to cluster those data into 2 groups, and I also want to know the accuracy of the K-means result, but I don't know how to calculate it. How do I know the correct clustering to compare the K-means result against in order to get the accuracy?
I also have multivariate data with class labels, so I know I can train an SVM. However, the accuracy of the SVM is too low, about 72% to 83%. I might have made some mistakes, and I would like to hear some feedback. Thanks.
% Generate four Gaussian clusters (1000 points each) with a shared covariance
mu1 = [0,0,0]; mu2 = [1,1,1]; mu3 = [2,2,2]; mu4 = [3,3,3];
m = 0.9;
s = [1 m m; m 1 m; m m 1];
data1 = mvnrnd(mu1,s,1000);
data2 = mvnrnd(mu2,s,1000);
data3 = mvnrnd(mu3,s,1000);
data4 = mvnrnd(mu4,s,1000);
all_data = [data1; data2; data3; data4];
% Cluster into 2 groups with k-means
[idx,ctrs,sumD,D] = kmeans(all_data,2,'distance','sqEuclidean','start','sample');
% Train an SVM (libsvm) using the k-means cluster indexes as labels
model = svmtrain(idx,all_data);
% Generate test data from the same four distributions
mu7 = [0,0,0]; mu8 = [1,1,1]; mu9 = [2,2,2]; mu10 = [3,3,3];
data7 = mvnrnd(mu7,s,1000);
data8 = mvnrnd(mu8,s,1000);
data9 = mvnrnd(mu9,s,1000);
data10 = mvnrnd(mu10,s,1000);
test_data = [data7; data8; data9; data10];
% idx (the training cluster labels) is reused here as the 'true' test labels
value = svmpredict(idx,test_data,model);
I want to know where my mistake is or what is wrong with my code. I don't know why my accuracy is so low, and I really want to improve it. Thanks!

To calculate the accuracy of the k-means algorithm you need a priori knowledge, i.e. a reference class vector, like the one you have for the SVM.
Although k-means is an unsupervised learning algorithm and you don't need a class vector to cluster the data, you do need one to calculate the accuracy.
I also have doubts about the way you build the SVM model. You use the indexes produced by the k-means algorithm, which has its own (currently unknown) accuracy, as training labels. Since you generated the data yourself, you know which distribution each point came from, so why not create the class vector directly?
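For instance, since the data above was generated in blocks of 1000 from four known distributions, the true labels are available, and a majority-vote mapping gives a rough sketch of an accuracy calculation (true_labels and the mapping loop are additions for illustration, not part of the original code):
true_labels = [ones(1000,1); 2*ones(1000,1); 3*ones(1000,1); 4*ones(1000,1)];
% Map each k-means cluster to the majority true class among its members;
% idx is the output of the kmeans call above (note: with k = 2 at most two
% of the four classes can be recovered, so consider clustering with k = 4)
mapped = zeros(size(idx));
for k = unique(idx)'
    mapped(idx == k) = mode(true_labels(idx == k));
end
accuracy = mean(mapped == true_labels);
fprintf('k-means accuracy: %.2f%%\n', 100*accuracy);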

Related

Data augmentation techniques for general datasets?

I am working on a machine learning problem and want to build neural-network-based classifiers for it in MATLAB. One problem is that the data is given in the form of features and the number of samples is considerably low. I know about data augmentation techniques for images, such as rotation, translation, affine transformation, etc.
I would like to know whether there are data augmentation techniques available for general datasets. For example, is it possible to use randomness to generate more data? I read the answer here but I did not understand it.
Please provide answers with working details if possible.
Any help will be appreciated.
You need to look into autoencoders. Effectively you pass your data into a shallow neural network, it applies a PCA-like analysis, and you can subsequently use it to generate more data.
MATLAB has an autoencoder class as well as a training function that will do all of this for you. From the MATLAB help files:
Generate the training data.
rng(0,'twister'); % For reproducibility
n = 1000;
r = linspace(-10,10,n)';
x = 1 + r*5e-2 + sin(r)./r + 0.2*randn(n,1);
Train autoencoder using the training data.
hiddenSize = 25;
autoenc = trainAutoencoder(x',hiddenSize,...
'EncoderTransferFunction','satlin',...
'DecoderTransferFunction','purelin',...
'L2WeightRegularization',0.01,...
'SparsityRegularization',4,...
'SparsityProportion',0.10);
Generate the test data.
n = 1000;
r = sort(-10 + 20*rand(n,1));
xtest = 1 + r*5e-2 + sin(r)./r + 0.4*randn(n,1);
Predict the test data using the trained autoencoder, autoenc.
xReconstructed = predict(autoenc,xtest');
Plot the actual test data and the predictions.
figure;
plot(xtest,'r.');
hold on
plot(xReconstructed,'go');
You can see the green circles, which represent additional data generated with the autoencoder.
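To generate genuinely new samples rather than reconstructions, one option is to perturb the encoded representation before decoding. A minimal sketch, assuming the autoenc trained above; the noise scale of 0.05 is an arbitrary assumption, not from the MATLAB docs:
z = encode(autoenc,xtest');               % latent codes, one column per sample
z_jittered = z + 0.05*randn(size(z));     % small random perturbation (assumed scale)
x_augmented = decode(autoenc,z_jittered); % new samples similar to the originals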

Feature extraction for classification in MATLAB

I have a 977x3 feature matrix
features = rand(977,3);
where each row is an observation and each column is a feature.
I calculate the pairwise distances between points with
dissimilarities = pdist(features);
and then I scale it with
feature_transf = mdscale(dissimilarities,3);
I use the new set of features (feature_transf) for classification.
Does this make sense? I get good classification accuracy training an SVM with 10-fold cross-validation.
Can you please tell me if it is methodologically incorrect?
Thanks,
Gabriele
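For reference, a minimal sketch of the pipeline described above with a cross-validated SVM (fitcsvm, crossval, and kfoldLoss are from the Statistics and Machine Learning Toolbox; the random labels are placeholders, not real data):
features = rand(977,3);
labels = randi([0 1],977,1);                 % placeholder class labels
dissimilarities = pdist(features);           % pairwise distances
feature_transf = mdscale(dissimilarities,3); % 3-D multidimensional scaling
mdl = fitcsvm(feature_transf,labels);        % SVM on the embedded features
cvmdl = crossval(mdl,'KFold',10);            % 10-fold cross-validation
acc = 1 - kfoldLoss(cvmdl)                   % cross-validated accuracy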

Improve the accuracy of an SVM

I am working on people detection using two different features, HOG and LBP. I used an SVM to train on the positive and negative samples. I want to ask how to improve the accuracy of the SVM itself, because every time I add more positive and negative samples, the accuracy decreases. Currently I have 1500 positive samples and 700 negative samples.
%extract features
[fpos,fneg] = features(pathPos, pathNeg);
%train SVM
HOG_featV = loadingV(fpos,fneg); % loading and labeling each training example
fprintf('Training SVM..\n');
%L = ones(length(SV),1);
T = cell2mat(HOG_featV(2,:));
HOGtP = HOG_featV(3,:)';
C = cell2mat(HOGtP); % each row of C corresponds to a training example
%extract features from LBP
[LBPpos,LBPneg] = LBPfeatures(pathPos, pathNeg);
LBP_featV = loadingV(LBPpos, LBPneg);
LBPlabel = cell2mat(LBP_featV(2,:));
LBPtP = LBP_featV(3,:);
M = cell2mat(LBPtP)'; % each row of M corresponds to a training example
featureVector = [C M];
model = svmlearn(featureVector, T','-t 2 -g 0.3 -c 0.5');
Does anyone know how to find the best C and gamma values for improving SVM accuracy?
Thank you,
To find the best C and gamma values for improving SVM accuracy you typically perform cross-validation. In short, you leave one sample out and test the SVM on that sample for each parameter combination (the 2 parameters define a 2D grid). Typically you would test each decade of the parameters over a certain range, for example C = [0.01, 0.1, 1, ..., 10^9] and gamma = [10^-5, 10^-4, ..., 10^3]. This should improve your SVM accuracy by optimizing the hyper-parameters.
Looking at your question again, it seems you are using svmlearn (the MATLAB interface to SVM-light). MATLAB's Statistics Toolbox has its own SVM and cross-validation functions built in; have a look at: http://www.mathworks.co.uk/help/stats/support-vector-machines-svm.html
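As a rough sketch, such a grid search can be scripted with libsvm, whose svmtrain returns the cross-validation accuracy directly when passed the '-v' flag (this assumes libsvm rather than the svmlearn wrapper, and reuses featureVector and T from the question):
% Coarse grid search over C and gamma with 5-fold cross-validation (libsvm)
bestAcc = 0;
for log2c = -2:2:8
    for log2g = -8:2:4
        opts = sprintf('-t 2 -c %g -g %g -v 5 -q', 2^log2c, 2^log2g);
        acc = svmtrain(T', featureVector, opts); % with -v, returns CV accuracy
        if acc > bestAcc
            bestAcc = acc; bestC = 2^log2c; bestG = 2^log2g;
        end
    end
end
fprintf('Best CV accuracy %.2f%% at C = %g, gamma = %g\n', bestAcc, bestC, bestG);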
I followed ASantosRibeiro's method to optimize the parameters before and it worked well.
In addition, you could try adding more negative samples until the ratio of negative to positive samples reaches 2:1. The reason is that in a real-time application you scan the whole image step by step, and the extracted negative windows are commonly far more numerous than the windows that contain people.
Thus, adding more negative training samples is a straightforward but effective way to improve overall accuracy (both the false positive and true negative rates).

How to use SVM in MATLAB?

I am new to MATLAB. Is there any sample code for classifying some data (with 41 features) with an SVM and then visualizing the result? I want to classify a data set (which has five classes) using the SVM method.
I read the "A Practical Guide to Support Vector Classification" article and saw some examples. My dataset is KDD99. I wrote the following code:
%% Load Data
[data,colNames] = xlsread('TarainingDataset.xls');
groups = ismember(colNames(:,42),'normal.');
TrainInputs = data;
TrainTargets = groups;
%% Design SVM
C = 100;
svmstruct = svmtrain(TrainInputs,TrainTargets,...
'boxconstraint',C,...
'kernel_function','rbf',...
'rbf_sigma',0.5,...
'showplot',false); % logical false, not the string 'false'
%% Test SVM
[dataTset,colNamesTest] = xlsread('TestDataset.xls');
TestInputs = dataTset;
groups = ismember(colNamesTest(:,42),'normal.');
TestOutputs = svmclassify(svmstruct,TestInputs,'showplot',false);
but I don't know how to get the accuracy or MSE of my classification. Also, when I set showplot to true in svmclassify, I get this warning:
The display option can only plot 2D training data
Could anyone please help me?
I recommend you use another SVM toolbox, libsvm. The link is as follows:
http://www.csie.ntu.edu.tw/~cjlin/libsvm/
After adding it to the MATLAB path, you can train and use your model like this:
model=svmtrain(train_label,train_feature,'-c 1 -g 0.07 -h 0');
% the parameters can be modified
[label, accuracy, probability] = svmpredict(test_label,test_feature,model);
train_label must be a vector; if there are more than two classes, libsvm trains a multi-class SVM automatically.
train_feature is an n*L matrix for n samples. You'd better preprocess the features before using them, and the test features must be preprocessed in the same way.
The accuracy you want is printed when the test is finished, but it covers only the whole dataset.
If you need the accuracy for positive and negative samples separately, you have to calculate it yourself from the predicted labels.
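For example, a minimal sketch of such a per-class calculation (assuming binary 0/1 labels in test_label and the label vector returned by svmpredict above):
pos = (test_label == 1);
neg = (test_label == 0);
sensitivity = mean(label(pos) == test_label(pos)); % accuracy on positives
specificity = mean(label(neg) == test_label(neg)); % accuracy on negatives
fprintf('Positive: %.2f%%, Negative: %.2f%%\n', 100*sensitivity, 100*specificity);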
Hope this will help you!
Your feature space has 41 dimensions, and plotting more than 3 dimensions is impossible.
A good way to better understand your data and the way SVM works is to begin with a linear SVM. This type of SVM is interpretable, which means that each of your 41 features has a weight (or 'importance') associated with it after training. You can then use plot3() with your data on the 3 'best' features from the linear SVM. Note how well your data is separated with those features and choose a kernel function and other parameters accordingly.
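A rough sketch of that idea with fitcsvm from the Statistics and Machine Learning Toolbox (binary labels assumed for simplicity; with five classes you would inspect one binary subproblem at a time):
% Train a linear SVM and rank features by the magnitude of their weights
mdl = fitcsvm(TrainInputs,TrainTargets,'KernelFunction','linear');
[~,order] = sort(abs(mdl.Beta),'descend');
top3 = order(1:3); % indices of the 3 'best' features
% Visualize the two classes on those three features
pos = (TrainTargets == 1);
plot3(TrainInputs(pos,top3(1)),TrainInputs(pos,top3(2)),TrainInputs(pos,top3(3)),'go');
hold on
plot3(TrainInputs(~pos,top3(1)),TrainInputs(~pos,top3(2)),TrainInputs(~pos,top3(3)),'r.');
grid on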

Implementing Logistic Regression in MATLAB

I have a data set of 13 attributes, where some are categorical and some are continuous (these can be converted to categorical). I need to use logistic regression to create a model that predicts the response for a given row and to find the prediction's accuracy, sensitivity, and specificity.
Can/Should I use cross validation to divide my data set and get the results?
Is there any code sample on how to go about doing this? (I'm new to all of this)
Should I be using mnrfit/mnrval or glmfit/glmval? What's the difference and how do I choose?
Thanks!
If you want to determine how well the model can predict unseen data you can use cross-validation. In MATLAB, you can use glmfit to fit the logistic regression model and glmval to test it.
Here is a sample of MATLAB code that illustrates how to do it, where X is the feature matrix, Labels is the class label for each case, num_shuffles is the number of repetitions of the cross-validation, and num_folds is the number of folds:
for j = 1:num_shuffles
    indices = crossvalind('Kfold',Labels,num_folds);
    for i = 1:num_folds
        test = (indices == i); train = ~test;
        [b,dev,stats] = glmfit(X(train,:),Labels(train),'binomial','logit'); % logistic regression
        Fit{j,i} = glmval(b,X(test,:),'logit')'; % cell array: each fold yields a vector of fitted probabilities
    end
end
Fit{j,i} then holds the fitted logistic regression estimates (predicted probabilities) for test fold i of shuffle j. Thresholding these, e.g. at 0.5, yields the predicted class for each test case. Performance measures are then calculated by comparing the predicted class labels against the actual class labels. Averaging the performance measures across all folds and repetitions gives an estimate of the model's performance on unseen data.
Originally answered by BGreene on Stats.SE.
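As an illustration of the thresholding step described above, a minimal sketch for one shuffle (assuming binary 0/1 Labels, the cell array Fit built in the loop, and that indices still holds the fold assignment from the last shuffle; the 0.5 cutoff is an assumption):
threshold = 0.5;
j = num_shuffles;           % score the last shuffle
pred = zeros(size(Labels));
for i = 1:num_folds
    pred(indices == i) = Fit{j,i} >= threshold;
end
accuracy    = mean(pred == Labels);
sensitivity = sum(pred == 1 & Labels == 1) / sum(Labels == 1); % true positive rate
specificity = sum(pred == 0 & Labels == 0) / sum(Labels == 0); % true negative rate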