Comparing MATLAB AR model algorithms

I'm currently trying to build an AR model to approximate the error process in a sensor system, and I'm comparing the different parameter estimators in MATLAB. I have sets of data that I'm trying to fit a model to, but I'm not sure about the benefits and disadvantages of the algorithms available in the Signal Processing Toolbox:
arburg: Autoregressive (AR) all-pole model parameters estimated using Burg method
arcov: Estimate AR model parameters using covariance method
armcov: Estimate AR model parameters using modified covariance method
aryule: Estimate autoregressive (AR) all-pole model using Yule-Walker method
If someone could give a more detailed description comparing the different algorithms, and which one would best model existing data, that would be very helpful.
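One practical way to compare the estimators on your own data is to fit the same model order with each one and compare the one-step prediction residuals. A minimal sketch, assuming `x` holds your zero-mean error-process samples and `p` is a model order you have chosen:

```matlab
% Fit an order-p AR model with each estimator and compare the
% residual variance on the same data.
p = 4;          % assumed model order
x = x(:);       % ensure column vector
methods = {@arburg, @arcov, @armcov, @aryule};
names   = {'Burg', 'Covariance', 'Modified covariance', 'Yule-Walker'};
for k = 1:numel(methods)
    a = methods{k}(x, p);      % AR coefficients [1 a1 ... ap]
    e = filter(a, 1, x);       % one-step prediction error (residual)
    fprintf('%-20s residual variance: %g\n', names{k}, var(e(p+1:end)));
end
```

The estimator with the smallest residual variance on held-out data is usually a reasonable choice; Burg and the modified covariance method often behave better on short records.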

Related

Is it possible to train classifier model in simulink?

I have a Simulink model in which some features are extracted from the signals. I want to train a classifier model (LDA, kNN, etc.) using these features. Is it possible to do this purely in Simulink? I want to do it in Simulink because I am trying to simulate a real-time online system that produces classification output causally.
I have tried to use the fitcdiscr and fitcknn functions in a MATLAB Function block, but they didn't work.
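One common workaround (a sketch, not verified against your model): the fitc* training functions do not support code generation, so they cannot run inside a MATLAB Function block, but prediction from a saved model can. You would train offline, save the model with saveLearnerForCoder, and load it in the block with loadLearnerForCoder; the variable and file names below are assumptions:

```matlab
% Offline, in MATLAB: train and save the classifier.
mdl = fitcknn(Xtrain, Ytrain);           % or fitcdiscr for LDA
saveLearnerForCoder(mdl, 'knnModel');    % writes knnModel.mat

% Inside the MATLAB Function block in Simulink:
function label = classifyFeatures(features) %#codegen
mdl = loadLearnerForCoder('knnModel');   % load the pretrained model
label = predict(mdl, features);          % classify the incoming feature vector
end
```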

single class classifier using Gaussian Mixture Model

I am working on a speaker identification project in MATLAB whose goal is to check whether a test speaker is my target speaker or not.
I used MFCC, LPCC, and pitch as my features, and I used libsvm as a single-class classifier to train my model, but my model's accuracy is quite low even when I test it on my training data.
I use pre-implemented mfcc and lpcc functions, so I am sure these two features are computed correctly. I therefore suspect the problem lies with the classifier and have decided to use a Gaussian Mixture Model instead. How can I use a Gaussian Mixture Model for single-class classification?
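One way to use a GMM as a single-class classifier is to fit it only to the target speaker's feature vectors, then threshold the log-likelihood of test frames. A sketch using fitgmdist from the Statistics and Machine Learning Toolbox; the number of mixtures, the regularization value, and the threshold tau are assumptions you would tune on held-out target data:

```matlab
% trainFeats: N-by-D matrix of target-speaker feature vectors (MFCC/LPCC/pitch)
% testFeats:  M-by-D matrix of feature vectors from the test utterance
gmm = fitgmdist(trainFeats, 8, 'RegularizationValue', 1e-3);  % 8 mixtures (assumed)
scores = log(pdf(gmm, testFeats));   % per-frame log-likelihood under the target model
isTarget = mean(scores) > tau;       % accept if average likelihood exceeds threshold
```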

Self-organizing maps and learning vector quantization

Self-organizing maps are more suited to clustering (dimension reduction) than to classification, yet SOMs are used in learning vector quantization for fine tuning. But LVQ is a supervised learning method, so to use a SOM with LVQ, the LVQ must be provided with a labelled training data set. Since SOMs only do clustering, not classification, and thus cannot produce labelled data, how can a SOM be used as an input for LVQ?
Does LVQ fine-tune the clusters in the SOM?
Before being used in LVQ, should the SOM be put through another classification algorithm so that it can classify the inputs, so that these labelled inputs may be used in LVQ?
It must be clear that supervised learning differs from unsupervised learning in that, in the former, the target values are known.
Therefore, the output of a supervised model is a prediction.
The output of an unsupervised model, instead, is a label whose meaning we don't know yet. For this reason, after clustering, it is necessary to profile each of those new labels.
Having said that, you could label the dataset using an unsupervised learning technique such as a SOM. Then you should profile each class in order to be sure you understand its meaning.
At this point, you can pursue two different paths, depending on your final objective:
1. use this new variable as a way for dimensionality reduction
2. use this new dataset, augmented with the additional variable representing the class, as labelled data that you will try to predict using LVQ
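For path 2, the pipeline can be sketched in MATLAB as follows (assumes the Neural Network / Deep Learning Toolbox functions selforgmap and lvqnet; the map size and hidden size are assumptions):

```matlab
% X: D-by-N matrix of unlabelled samples (one column per sample).
% Step 1: cluster with a SOM and take each node as a provisional class.
som = selforgmap([4 4]);        % 16-node map (assumed size)
som = train(som, X);
labels = vec2ind(som(X));       % cluster index for each sample

% ... profile (and possibly merge) clusters here so labels are meaningful ...

% Step 2: train an LVQ network on the now-labelled data.
T = ind2vec(labels);            % one-hot targets for LVQ
lvq = lvqnet(16);               % hidden size (assumed)
lvq = train(lvq, X, T);
```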
Hope this can be useful!

How to predict labels for new data (test set) by the PartitionedEnsemble model in Matlab?

I trained an ensemble model (RUSBoost) for a binary classification problem with the fitensemble() function in MATLAB 2014a. The training is performed with 10-fold cross-validation via the "kfold" input parameter of fitensemble().
However, the model output by this function cannot be used to predict the labels of new data with predict(model, Xtest). I checked the MATLAB documentation, which says we can use the kfoldPredict() function to evaluate the trained model, but I did not find any way to pass new data to this function. Also, I found that the structure of the model trained with cross-validation differs from that of a model trained without cross-validation. So, could anyone please advise me how to use a model trained with cross-validation to predict labels for new data? Thanks!
kfoldPredict() needs a RegressionPartitionedModel or ClassificationPartitionedEnsemble object as input; this already contains the models and data for k-fold cross-validation.
The RegressionPartitionedModel object has a field Trained, in which the learners trained during cross-validation are stored.
You can take any of these learners and use it like predict(learner, Xdata).
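Concretely, a sketch of both options (variable names are assumptions): extract one fold model from the cross-validated ensemble, or keep cross-validation for evaluation only and retrain a final model on all training data:

```matlab
% Cross-validated RUSBoost ensemble for evaluation.
cvModel = fitensemble(Xtrain, Ytrain, 'RUSBoost', 100, 'Tree', 'kfold', 10);
cvLoss  = kfoldLoss(cvModel);          % cross-validated error estimate

% Option 1: use one of the 10 fold models to predict new data.
learner = cvModel.Trained{1};
yPred   = predict(learner, Xtest);

% Option 2 (often preferable): train a final model on all training data.
finalModel = fitensemble(Xtrain, Ytrain, 'RUSBoost', 100, 'Tree');
yPred2     = predict(finalModel, Xtest);
```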
Edit:
If k is too large, it is possible that there is too little meaningful data in one or more iterations, so the model for that iteration is less accurate.
There are no general rules for choosing k, but k=10, the MATLAB default, is a good starting point to play around with.
Maybe this is also interesting for you: https://stats.stackexchange.com/questions/27730/choice-of-k-in-k-fold-cross-validation

svm classification

I am a beginner in MATLAB doing my programming project in digital image processing, i.e. magnetic resonance image classification using wavelet features + SVM + PCA + ANN. I executed the example SVM classification from the MATLAB toolbox and modified it to fit my requirements. I am facing problems in storing more than one feature in an input vector and in giving new input to the SVM. Please help.
Simply feed multidimensional feature data to the svmtrain(Training, Group) function as the Training parameter (Training can be a matrix in which each row is a sample and each column represents a separate feature). After that, use svmclassify(SVMStruct, Sample) to classify the test data.
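A minimal sketch of that workflow (variable names are assumptions; note that svmtrain/svmclassify are the legacy functions from older Statistics Toolbox releases, replaced by fitcsvm in newer ones):

```matlab
% Build the training matrix: one row per image, one column per feature.
Training = [feat1(:) feat2(:) feat3(:)];   % e.g. three wavelet-derived features
Group    = labels(:);                      % class label for each row

SVMStruct = svmtrain(Training, Group);     % train the SVM

% Classify a new sample: same feature order as the Training columns.
newSample = [f1 f2 f3];
predicted = svmclassify(SVMStruct, newSample);
```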