MATLAB feature selection for regression models

I have a dataset of videos with features x1, ..., x24 and a score y. I want to perform feature selection to reduce the error rate. I've found sequentialfs in MATLAB, but it seems to be for classification.
Is there a feature selection function for regression? If there is, could you provide an example of how to use it?

It seems sequentialfs works for regression as well as classification. From the documentation:
Typical loss measures include sum of squared errors for regression models (sequentialfs computes the mean-squared error in this case), and the number of misclassified observations for classification models (sequentialfs computes the misclassification rate in this case).
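With a regression model, the function handle you pass to sequentialfs should return the sum of squared errors on the held-out fold. A minimal sketch, assuming the data live in X (n-by-24) and y (n-by-1) and fitting ordinary least squares with regress (the variable names are assumptions, not from the original question):

% criterion: sum of squared errors of a linear model fit on the training fold
critfun = @(Xtr, ytr, Xte, yte) ...
    sum((yte - [ones(size(Xte,1),1) Xte] * regress(ytr, [ones(size(Xtr,1),1) Xtr])).^2);

opts = statset('Display', 'iter');
% 10-fold cross-validated forward selection over the 24 features
[selected, history] = sequentialfs(critfun, X, y, 'cv', 10, 'options', opts);
find(selected)   % indices of the selected features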

Related

Random component on fitcsvm/predict

I have a training dataset and a test dataset, and I train an SVM with fitcsvm in MATLAB. Then I test the trained model with predict. I'm always using the same datasets, but I keep getting different AUCs for the same model, which makes me wonder where in the process there is a random component. Note that I'm aware that, formally, there is no such thing as a ROC curve or AUC for an SVM, and I'm not asking about the statistical background of the SVM problem. The question is about the MATLAB implementation of the training/test algorithm: I expected to get the same results because the training algorithm is, as far as I know, a deterministic process.
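A minimal sketch of the workflow being described (variable names are assumptions); fixing MATLAB's global random stream before each run is one way to check whether the variation comes from a random component on the MATLAB side:

rng(1);                                    % fix the seed before training
mdl = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'rbf');
[~, score] = predict(mdl, Xtest);          % use scores, not just hard labels
% AUC via perfcurve, assuming class 1 is the positive class and is listed
% second in mdl.ClassNames
[~, ~, ~, auc] = perfcurve(ytest, score(:, 2), 1);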

Label Normalization in Deep Regression Networks

In regression problems there is typically no reason for normalizing/rescaling the labels (targets) before performing the optimization.
In deep regression networks there would, in principle, be no need to rescale, since the last activation function is linear and the cost function is the mean squared difference between predictions and targets.
On the other hand, for numerical stability and performance of the training process, the values of the input and hidden units are kept in the range [-1,1] via feature normalization. Doesn't it mean that the labels should be rescaled to the range [-1,1] too?
Traditionally in regression problems, you denormalize the output that is generated (after having trained on normalized targets).
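An illustrative sketch of that pattern (not from the original answer; variable names are assumptions):

ymin = min(ytrain);  ymax = max(ytrain);
yscaled = 2 * (ytrain - ymin) / (ymax - ymin) - 1;       % targets mapped to [-1, 1]

% ... train the network on (Xtrain, yscaled) and collect raw predictions ypred_scaled ...

ypred = (ypred_scaled + 1) / 2 * (ymax - ymin) + ymin;   % map predictions back to the original scale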

h2o random forest calculating MSE for multinomial classification

Why is h2o.randomForest calculating MSE on the out-of-bag sample and during training for a multinomial classification problem?
I have also done binary classification with h2o.randomForest; there it calculated AUC on the out-of-bag sample and during training, but for multiclass classification the random forest is calculating MSE, which seems suspicious. Please see this screenshot.
My target variable was a factor containing 4 factor levels: model1, model2, model3 and model4. In the screenshot you can also see a confusion matrix for these factors.
Can someone please explain this behaviour?
Both binomial and multinomial classification display MSE, so you will see it in the Scoring History table for both models (highlighted training_MSE column).
H2O does not evaluate a multinomial AUC. A few evaluation methods exist, but there is not yet a single widely adopted one. The pROC package discusses the method of Hand and Till, but notes that it cannot be plotted and that its results are rarely tested. Log loss and classification error are still available, specific to classification, as each has a standard method of evaluation in a multinomial context.
There is a confusion matrix comparing your 4 factor levels, as you highlighted. Can you clarify what more you are expecting? If you were looking for four individual confusion matrices, the four-column table contains enough information that they could be computed.
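To illustrate that last point with a small sketch (illustrative numbers, MATLAB syntax rather than H2O/R): a single K-class confusion matrix C, with rows as actual classes and columns as predicted classes, contains everything needed to derive a one-vs-rest confusion matrix for each class.

C = [50 3 2 1; 4 45 5 2; 1 6 40 3; 2 1 4 48];   % example counts for 4 classes
for k = 1:size(C, 1)
    TP = C(k, k);
    FN = sum(C(k, :)) - TP;    % actual class k, predicted as something else
    FP = sum(C(:, k)) - TP;    % predicted class k, actually something else
    TN = sum(C(:)) - TP - FN - FP;
    fprintf('class %d: TP=%d FP=%d FN=%d TN=%d\n', k, TP, FP, FN, TN);
end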

Binary Classification Cost Function, Neural Networks

I've been tweaking the Deep Learning tutorial to train the weights of a Logistic Regression model for a binary classification problem and the tutorial uses the negative log-likelihood cost function below...
# predicted class probabilities via softmax over the two classes
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

def negative_log_likelihood(self, y):
    # mean negative log-probability of the correct class for each example
    return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
However, my weights don't seem to be converging properly as my validation error increases over successive epochs.
I was wondering if I'm using the proper cost function to converge upon the proper weights. It might be useful to note that my two classes are very imbalanced and my predictors are already normalized.
A few reasons I can think of are:
Your learning rate is too high.
For binary classification, try squared error or a cross-entropy error instead of negative log likelihood (see the formula below).
You are using just one layer; maybe the dataset you are using requires more, so connect more hidden layers.
Play around with the number of layers and hidden units.
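For reference, the binary cross-entropy loss mentioned above is usually written as follows (standard definition, not taken from the original answer), where $y_i \in \{0, 1\}$ is the true label and $\hat{y}_i$ the predicted probability of class 1:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$$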

Matlab Question - Principal Component Analysis

I have a set of 100 observations where each observation has 45 characteristics, and each of those observations has a label attached which I want to predict based on those 45 characteristics. So it's an input matrix with dimension 45 x 100 and a target matrix with dimension 1 x 100.
The thing is that I want to know how many of those 45 characteristics are relevant to my data, basically via principal component analysis, and I understand that I can do this with the MATLAB function processpca.
Could you please tell me how can I do this? Suppose that the input matrix is x with 45 rows and 100 columns and y is a vector with 100 elements.
Assuming that you want to construct a model of the 1x100 vector, based on the 45x100 matrix, I am not convinced that PCA will do what you think. PCA can be used to select variables for model estimation, but this is a somewhat indirect way to gather a set of model features. Anyway, I suggest reading both:
Principal Components Analysis
and...
Putting PCA to Work
...both of which provide code in MATLAB not requiring any Toolboxes.
Have you tried COEFF = princomp(x)?
COEFF = princomp(X) performs principal components analysis (PCA) on the n-by-p data matrix X and returns the principal component coefficients, also known as loadings. Rows of X correspond to observations, columns to variables. COEFF is a p-by-p matrix, each column containing coefficients for one principal component. The columns are in order of decreasing component variance.
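A small sketch of how that could look for the data in the question (note the transpose, since x is 45-by-100 with variables in rows; the 95% threshold is just illustrative):

[COEFF, SCORE, latent] = princomp(x');     % pca(x') on newer releases
explained = 100 * latent / sum(latent);    % percent of variance per component
nComp = find(cumsum(explained) >= 95, 1)   % components needed for ~95% of the variance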
From your question I deduced that you don't necessarily need to do this in MATLAB, but just want to analyze your dataset. In my opinion, the key is visualizing the dependencies.
If you're not forced to do the analysis in MATLAB, I'd suggest trying more specialized software such as WEKA (www.cs.waikato.ac.nz/ml/weka/) or RapidMiner (rapid-i.com). Both tools provide PCA and other dimensionality reduction algorithms, and they contain nice visualization tools.
Your use case sounds like a combination of Classification and Feature Selection.
Statistics Toolbox offers a lot of good capabilities in this area. The toolbox provides access to a number of classification algorithms, including
Naive Bayes classifiers
Bagged decision trees (aka random forests)
Binomial and multinomial logistic regression
Linear discriminant analysis
You also have a variety of options available for feature selection, including
sequentialfs (forward and backward feature selection)
relieff
TreeBagger, which also supports options for feature selection and estimating variable importance (a short sketch follows below)
Alternatively, you can use some of Optimization Toolbox's capabilities to write your own custom equations to estimate variable importance.
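Returning to the TreeBagger suggestion above, a hedged sketch of estimating variable importance (variable names taken from the question; newer releases use 'OOBPredictorImportance' and OOBPermutedPredictorDeltaError instead of the older names shown here):

b = TreeBagger(200, x', y', 'Method', 'classification', 'OOBVarImp', 'on');
[~, order] = sort(b.OOBPermutedVarDeltaError, 'descend');
order(1:10)    % the ten characteristics with the highest estimated importance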
A couple of months back, I did a webinar for The MathWorks titled "Computational Statistics: Getting Started with Classification using MATLAB". You can watch the webinar at
http://www.mathworks.com/company/events/webinars/wbnr51468.html?id=51468&p1=772996255&p2=772996273
The code and the data set for the examples is available at MATLAB Central
http://www.mathworks.com/matlabcentral/fileexchange/28770
With all this said and done, many people use principal component analysis as a pre-processing step before applying classification algorithms. PCA gets used a lot:
When you need to extract features from images
When you're worried about multicollinearity
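A rough sketch of that pre-processing pattern for the question's data (the choice of 10 components and the discriminant classifier are assumptions for illustration):

[COEFF, SCORE] = princomp(x');     % observations in rows, so transpose x
Xreduced = SCORE(:, 1:10);         % keep the first 10 principal components
mdl = fitcdiscr(Xreduced, y');     % then train a classifier, e.g. linear discriminant analysis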
You should find the correlation matrix. In the following example, MATLAB finds the correlation matrix with the corr function:
http://www.mathworks.com/help/stats/feature-transformation.html#f75476
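A short sketch of that suggestion (names taken from the question; the top-10 cut-off is arbitrary):

R = corr(x', y');                        % correlation of each characteristic with the target
[~, order] = sort(abs(R), 'descend');
order(1:10)                              % characteristics most correlated with the label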