I have a large feature dataset of around 111 MB for classification, with 217,000 data points and 1,760,000 features per point. When used to train an SVM in MATLAB, it takes a lot of time.
How can this data be processed in MATLAB?
It depends on what sort of SVM you are building.
As a rule of thumb, with such big feature sets you need to look at linear classifiers, such as an SVM with no kernel (i.e., a linear kernel), or logistic regression with various regularizations, etc.
If you're training an SVM with a Gaussian kernel, the training algorithm has O(max(n,d) * min(n,d)^2) complexity, where n is the number of examples and d the number of features. In your case this ends up being O(d*n^2), which is quite big.
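A minimal MATLAB sketch of the linear approach, using fitclinear, which is designed for high-dimensional data (X and y are hypothetical stand-ins for your feature matrix and label vector):
% Sketch only: assumes X is n-by-d and y is the n-by-1 label vector.
model = fitclinear(X, y, 'Learner', 'svm', 'Regularization', 'ridge');
% When d >> n, passing observations as columns can be faster:
% model = fitclinear(X', y, 'ObservationsIn', 'columns');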
I am classifying gender using a KNN classifier.
I want to use an SVM classifier instead of the KNN classifier, with the same labels of 0 and 1 (0 for women and 1 for men).
I have a matrix of test examples, sample, a matrix of training examples, training, and a vector with the labels for the training examples, group. I want class, a vector of the labels for the test examples.
class = knnclassify(sample, training, group);
if class == 1
    x = 'Male';
else
    x = 'Female';
end
How can I change this code to find class using an SVM?
To train an SVM, you will need the Statistics and Machine Learning Toolbox.
The biggest difference between knnclassify and an SVM classifier is that training and classifying new labels are two separate steps.
1. Train your SVM: fitcsvm
This step teaches the classifier how to distinguish between your two classes. It is learning a linear separator (or a weighted combination of the features) which has the largest margin between positive and negative examples. All the examples you give it need to have ground truth labels.
SVMs have many tunable parameters that you can adjust during the training step. There are several good tutorials in the MATLAB documentation which describe the differences, but for the most basic version, you can just use your training examples:
model = fitcsvm(training, group);
This model will be used in the next step.
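For instance, a few commonly tuned name-value options look like this (the specific values are illustrative, not recommendations):
model = fitcsvm(training, group, ...
    'KernelFunction', 'rbf', ...  % Gaussian kernel instead of the default linear one
    'BoxConstraint', 1, ...       % regularization strength
    'Standardize', true);         % z-score each feature before training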
2. Classify new examples: predict
To classify your new example, run
class = predict(model, sample);
Note that the trained model is the first argument.
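Putting both steps together, your original snippet becomes something like this (the if/else assumes sample is a single test example, so predict returns one label):
model = fitcsvm(training, group);  % step 1: train
class = predict(model, sample);    % step 2: classify
if class == 1
    x = 'Male';
else
    x = 'Female';
end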
Notes:
Using your model, you can also run k-fold cross-validation, which is useful for accuracy analysis.
cvModel = crossval(model);
classError = kfoldLoss(cvModel);
You can also save your model, like any other MATLAB variable, for future use.
save('model.mat', 'model');
knnclassify comes from the Bioinformatics Toolbox. In the Statistics and Machine Learning Toolbox, there is also a KNN model which you train with fitcknn and classify with predict. The benefit is that you can reuse your KNN model with several sets of data, compare cross-validation results, and save it for future use.
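A minimal sketch of that fitcknn workflow (the NumNeighbors value is illustrative):
knnModel = fitcknn(training, group, 'NumNeighbors', 5);
class = predict(knnModel, sample);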
I have read this line about neural networks:
"Although the perceptron rule finds a successful weight vector when the training examples are linearly separable, it can fail to converge if the examples are not linearly separable."
My data is as follows: the features are production of rubber, consumption of rubber, production of synthetic rubber, and exchange rate; all values are scaled.
My question is: since the data is not linearly separable, should I apply an ANN to it or not? Is there a rule that it should be applied only to linearly separable data? I am getting good results using it (0.09% MAPE error). I have also applied SVM regression (the fitrsvm function in MATLAB), so I have to ask: can SVM be used in forecasting/prediction, or is it used only for classification? I haven't read anywhere about using SVM to forecast, and the results for SVM are also not good. What can be the possible reason?
Neural networks are not perceptrons. The perceptron is one of the oldest ideas and is at most a single building block of neural networks. The perceptron is designed for binary, linear classification, and your problem is neither binary classification nor linearly separable. You are looking at regression here, where neural networks are a good fit.
"Can SVM be used in forecasting/prediction or is it used only for classification? I haven't read anywhere about using SVM to forecast, and the results for SVM are also not good. What can be the possible reason?"
SVM has a regression "clone" called SVR (support vector regression), which can be used for any task a NN (as a regressor) can be used for. There are of course some typical characteristics of each (like SVR being a non-parametric estimator, etc.). For the task at hand, both approaches (as well as any other regressor; there are dozens of them!) are fine.
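A minimal MATLAB sketch of SVR via fitrsvm, assuming X holds the scaled features and y the numeric target:
mdl = fitrsvm(X, y, 'KernelFunction', 'gaussian', 'Standardize', true);
yhat = predict(mdl, X);  % in-sample predictions; evaluate on held-out data in practice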
I am solving a classification problem. I train an unsupervised neural network on a set of entities (using the skip-gram architecture).
The way I evaluate is to search for the k nearest neighbours of each point in the validation data among the training data. I take a weighted sum (weights based on distance) of the labels of the nearest neighbours and use that as the score for each point of the validation data.
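In MATLAB terms, the scoring I describe looks roughly like this (trainX, trainY, valX, and the value of k are placeholder names):
k = 10;                                % illustrative value
[idx, dist] = knnsearch(trainX, valX, 'K', k);
w = 1 ./ max(dist, eps);               % inverse-distance weights
w = w ./ sum(w, 2);                    % normalize weights per validation point
score = sum(w .* trainY(idx), 2);      % weighted label score per validation point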
Observation: as I increase the number of epochs (model 1: 600 epochs, model 2: 1400 epochs, model 3: 2000 epochs), my AUC improves at smaller values of k but saturates at similar values.
What could be a possible explanation of this behaviour?
[Reposted from CrossValidated]
To cross-check whether imbalanced classes are an issue, try fitting an SVM model. If that gives better classification (possible if your ANN is not very deep), it may be concluded that the classes should be balanced first.
Also, try some kernel functions to check whether such a transformation makes the data linearly separable.
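A quick illustrative way to compare kernels is cross-validated loss (X and y are placeholders for your features and labels):
kernels = {'linear', 'gaussian', 'polynomial'};
for i = 1:numel(kernels)
    mdl = fitcsvm(X, y, 'KernelFunction', kernels{i}, 'Standardize', true);
    fprintf('%s kernel: CV loss = %.3f\n', kernels{i}, kfoldLoss(crossval(mdl)));
end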
I've been tweaking the Deep Learning tutorial to train the weights of a logistic regression model for a binary classification problem, and the tutorial uses the negative log-likelihood cost function below:
# Class probabilities: a softmax over the linear scores
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

def negative_log_likelihood(self, y):
    # Mean over the minibatch of -log P(correct class | x)
    return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
However, my weights don't seem to be converging properly, as my validation error increases over successive epochs.
I was wondering if I'm using the proper cost function to converge upon the proper weights. It might be useful to note that my two classes are very imbalanced and my predictors are already normalized.
A few reasons I can think of:
Your learning rate is too high
For binary classification, try squared error or cross-entropy error instead of the negative log-likelihood.
You are using just one layer. Maybe the dataset you are using requires more layers, so connect more hidden layers.
Play around with the number of layers and hidden units.
Suppose I have a very big training set, so that MATLAB hangs while training or there is insufficient memory to hold the training set.
Is it possible to split the training set into parts and train the network part by part?
Is it possible to train the network with one sample at a time (one by one)?
You can just manually divide the dataset into batches and train on them one after another. A sketch, assuming allInputs and allTargets hold your full data with samples as columns (as the toolbox expects), and batch_size, num_batches, and num_samples are chosen by you:
for bn = 1:num_batches
    idx = (bn-1)*batch_size + 1 : min(bn*batch_size, num_samples);
    inputs  = allInputs(:, idx);       % inputs for batch bn
    targets = allTargets(:, idx);      % targets for batch bn
    net = train(net, inputs, targets); % continue training from current weights
end
The batch size should still be greater than 1, but this should reduce memory consumption during training.
In the case of the trainlm training algorithm, the net.efficiency.memoryReduction option could help.
Also, instead of the default trainlm algorithm, you can try less memory-hungry ones like trainrp.
For details on training algorithms, check the MATLAB documentation.
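As a sketch, those two options might look like this (the memoryReduction value is illustrative):
net.trainFcn = 'trainrp';             % resilient backpropagation, low memory use
% or keep trainlm but trade speed for memory:
net.trainFcn = 'trainlm';
net.efficiency.memoryReduction = 2;   % split the Jacobian computation into 2 chunks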
I assumed above that you are using the corresponding MATLAB toolbox for neural networks.
Regarding training one sample at a time, you could look into the stochastic gradient descent algorithm. But it looks like it is not in the default set of training algorithms in the toolbox.