How to use SVM for unsupervised clustering with scikit-learn - cluster-analysis

I would like to use the SVM implementation in the scikit-learn library to do unsupervised clustering. I have been reading the documentation and many links on the net, but I can't find how to do that. Would you mind explaining to me how to use scikit-learn for that, and also the concept of unsupervised clustering with SVM?

Related

AWS SageMaker RandomCutForest (RCF) vs scikit-learn RandomForest (RF)?

Is there a difference between the two, or are they different names for the same algorithm?
RandomCutForest (RCF) is an unsupervised method primarily used for anomaly detection, while RandomForest (RF) is a supervised method that can be used for regression or classification.
For RCF, see the documentation (here) and a notebook example (here).
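scikit-learn itself has no RandomCutForest, but to get a feel for the supervised/unsupervised distinction, here is a minimal sketch that puts a supervised RandomForestClassifier next to IsolationForest, scikit-learn's rough stand-in for an unsupervised tree-ensemble anomaly detector; the toy data is made up purely for illustration.

    # Supervised RandomForest vs. an unsupervised tree-ensemble anomaly detector.
    # IsolationForest is only a rough scikit-learn analogue of SageMaker's RCF.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, IsolationForest

    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 4))              # toy feature matrix
    y = (X[:, 0] + X[:, 1] > 0).astype(int)    # labels, needed only by the supervised model

    # Supervised: RandomForest learns a mapping from X to y.
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(rf.predict(X[:5]))

    # Unsupervised: IsolationForest is fit on X alone and flags outliers (-1) vs inliers (1).
    iso = IsolationForest(random_state=0).fit(X)
    print(iso.predict(X[:5]))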

Face detection using HOG features

Currently, for face detection, I am using an SVM classifier over a HOG feature set. But I need to implement other classifiers over the same HOG features and compare the results between them. What other classifiers can I use besides SVM?
There are plenty of other classification algorithms; a simple logistic regression could be a starting point. You could also try a Gaussian-process-based classifier, random forests/decision trees, and many others, each with their respective pros and cons. A minimal comparison sketch follows below.
See e.g. this link for an overview
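As a rough sketch of what such a comparison could look like in scikit-learn, assuming you already have your HOG descriptors and labels (the X_hog and y below are random stand-ins for the real feature set):

    # Comparing several scikit-learn classifiers on the same (precomputed) HOG features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.RandomState(0)
    X_hog = rng.rand(100, 324)             # stand-in for real HOG descriptors
    y = rng.randint(0, 2, size=100)        # stand-in for face / non-face labels

    classifiers = {
        "linear SVM": LinearSVC(),
        "logistic regression": LogisticRegression(max_iter=1000),
        "gaussian process": GaussianProcessClassifier(),
        "decision tree": DecisionTreeClassifier(),
        "random forest": RandomForestClassifier(n_estimators=100),
    }

    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X_hog, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")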

text classification methods? SVM and decision tree

I have a training set and I want to use a classification method for classifying other documents according to my training set. My documents are news articles and the categories are sports, politics, economics, and so on.
I understand Naive Bayes and KNN completely, but SVM and decision trees are vague to me, and I don't know whether I can implement these methods by myself or whether there are existing applications for using them.
What is the best method I can use for classifying documents in this way?
Thanks!
Naive Bayes
Though this is the simplest algorithm and all features are treated as independent, in real text classification cases this method works great, and I would try this algorithm first for sure.
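A minimal scikit-learn sketch of this idea; the tiny corpus and labels are made up, standing in for real news documents:

    # Multinomial Naive Bayes on bag-of-words counts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    docs = [
        "the team won the match",          # sports
        "the election results are in",     # politics
        "stocks fell sharply today",       # economics
        "the striker scored twice",        # sports
    ]
    labels = ["sports", "politics", "economics", "sports"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(docs, labels)
    print(model.predict(["the market rallied after the vote"]))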
KNN
kNN is a supervised classification method; you may be thinking of k-means, which is a clustering algorithm. Make sure you are not mixing up the concepts of clustering and classification.
SVM
SVM has SVC (classification) and SVR (regression) variants for predicting classes and continuous values, respectively. It sometimes works well, but in my experience it has poor performance in text classification, because it places high demands on good tokenizers (filters), and the vocabulary of a real dataset always contains dirty tokens, so the accuracy ends up being poor.
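For reference, a tiny sketch of the two flavours on made-up data, SVC for discrete labels and SVR for continuous targets:

    import numpy as np
    from sklearn.svm import SVC, SVR

    X = np.array([[0.0], [1.0], [2.0], [3.0]])

    # Classification: discrete labels in, discrete labels out.
    clf = SVC(kernel="rbf").fit(X, [0, 0, 1, 1])
    print(clf.predict([[1.5]]))

    # Regression: continuous targets in, continuous predictions out.
    reg = SVR(kernel="rbf").fit(X, [0.1, 1.1, 1.9, 3.2])
    print(reg.predict([[1.5]]))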
Random Forest (decision tree)
I've never tried this method for text classification, because I think a decision tree needs a few key split nodes, while it's hard to find "several key tokens" for text classification, and random forests perform badly on highly sparse, high-dimensional data.
FYI
These are all from my own experience; for your case, there is no better way to decide which method to use than to try each algorithm and see how well it fits your data.
Apache Mahout is a great tool for machine learning. It covers algorithms in three areas: recommendation, clustering, and classification. You could try this library, but you will have to learn some basics about Hadoop.
And for machine learning experiments, Weka is a software toolkit that integrates many algorithms.
Linear SVMs are one of the top algorithms for text classification problems (along with Logistic Regression). Decision Trees suffer badly in such high dimensional feature spaces.
The Pegasos algorithm is one of the simplest Linear SVM algorithms and is incredibly effective.
EDIT: Multinomial Naive Bayes also works well on text data, though usually not as well as Linear SVMs. kNN can work okay, but it is already a slow algorithm and never tops the accuracy charts on text problems.
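A minimal sketch of such a linear-SVM text pipeline in scikit-learn (TF-IDF features plus LinearSVC), with an SGD-trained hinge-loss variant as a Pegasos-style alternative; the toy documents are placeholders for a real news corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import SGDClassifier   # hinge loss -> a Pegasos-style linear SVM
    from sklearn.pipeline import make_pipeline

    docs = ["the team won the cup", "parliament passed the bill",
            "inflation rose last month", "the coach praised the defence"]
    labels = ["sports", "politics", "economics", "sports"]

    # TF-IDF features + a linear SVM classifier.
    svm_pipeline = make_pipeline(TfidfVectorizer(), LinearSVC())
    svm_pipeline.fit(docs, labels)
    print(svm_pipeline.predict(["the senate debated the new bill"]))

    # An SGD-trained linear SVM (hinge loss) scales well to large corpora.
    sgd_pipeline = make_pipeline(TfidfVectorizer(), SGDClassifier(loss="hinge"))
    sgd_pipeline.fit(docs, labels)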
If you are familiar with Python, you may consider NLTK and scikit-learn. The former is dedicated to NLP while the latter is a more comprehensive machine learning package (but it has a great inventory of text processing modules). Both are open source and have great community support on SO.

SVM MATLAB Implementation

I have a homework assignment to classify multi-class images with Support Vector Machines. I am not allowed to use any toolbox; I have to write the SVM code myself, and I have to implement it in MATLAB. Since I am not familiar with MATLAB, I am having some trouble with the implementation.
Can you suggest any pseudocode or paper that explains a basic SVM implementation? I know the theory of SVM, but I am just not good at programming. Any SVM code would be very helpful!
Thank you in advance for your help.
I like using the LibSVM library. On its web pages you can find some useful hints and descriptions of SVMs, and there is also a beginner's guide to SVM classification. The source code itself is available as well.
http://www.csie.ntu.edu.tw/~cjlin/libsvm/
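Since the question asks for pseudocode rather than a library, here is a hedged sketch of the Pegasos algorithm (stochastic subgradient descent on the hinge loss), written in Python/NumPy as readable pseudocode that should translate almost line-by-line to MATLAB. It covers the binary case with -1/+1 labels and omits the optional projection step; for multi-class images you would train one such classifier per class (one-vs-rest). The toy data at the end is only a sanity check.

    import numpy as np

    def pegasos_train(X, y, lam=0.01, n_iters=10000, seed=0):
        """X: (n_samples, n_features) array, y: array of -1/+1 labels."""
        rng = np.random.RandomState(seed)
        n, d = X.shape
        w = np.zeros(d)
        for t in range(1, n_iters + 1):
            i = rng.randint(n)                  # pick a random training example
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * np.dot(w, X[i]) < 1:      # margin violated: hinge-loss subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # only the regularizer contributes
                w = (1 - eta * lam) * w
        return w

    def pegasos_predict(w, X):
        return np.sign(X @ w)

    # Toy usage on linearly separable data.
    X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w = pegasos_train(X, y)
    print(pegasos_predict(w, X))   # should recover the labels on this separable toy set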

term clustering library?

Does anybody know an open-source/free library that does term clustering?
Thanks,
yaniv
Apache Mahout provides algorithms for clustering.
Check out NLTK. There are a number of clustering modules that might work for you.
WEKA has a whole suite of tools for text processing along with clustering.
If you're into Python there is NLTK, as already mentioned by its author, but there is also sklearn, which provides much more than just clustering. (The link takes you to text-applicable examples.)
Python's scikit-learn has some dedicated packages for text analysis. Besides, it has a complete suite of clustering algorithms, including K-means, affinity propagation (AP), mean shift, spectral clustering, hierarchical clustering, and DBSCAN (with appropriate evaluation metrics). This may be helpful for your term clustering task; a minimal term-clustering sketch follows after the links below.
Link to Scikit Learn latest video tutorial
Link to Scikit Learn Book
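One way to do term clustering (clustering the terms themselves rather than the documents) with scikit-learn is to build a TF-IDF document-term matrix and run K-means over its columns; this is only a sketch, and the tiny corpus and k=2 below are made up for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "the goalkeeper saved the penalty",
        "the striker scored a goal",
        "the bank raised interest rates",
        "inflation and interest rates climbed",
    ]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)                # shape: (n_docs, n_terms)
    terms = vec.get_feature_names_out()

    # Each term is represented by its column of TF-IDF weights across documents.
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = km.fit_predict(X.T.toarray())     # cluster the term vectors (columns)

    for cluster_id in range(2):
        print(cluster_id, [t for t, l in zip(terms, labels) if l == cluster_id])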