Is there a way to optimise for high precision of one class in binary classification?

The binary cross-entropy loss function optimises for accuracy, but what if I'm only interested in having high precision (not F1) for one of the two classes?

Assuming you want to tune your model with some sort of grid-search CV, you need to define your own scoring function, which will be the precision for the one class you care about.
Using sklearn's confusion matrix, the precision of either the 0s or the 1s can be calculated the following way:
from sklearn.metrics import confusion_matrix

conf_mat = confusion_matrix(y_test, y_pred)
# Precision of a class = true positives / all samples predicted as that class (column sum)
print("Precision of 0s: {:.2f}".format(conf_mat[0][0] / conf_mat[:, 0].sum()))
print("Precision of 1s: {:.2f}\n".format(conf_mat[1][1] / conf_mat[:, 1].sum()))
Then you need to read the scikit-learn documentation on 3.3.1.2. Defining your scoring strategy from metric functions, which explains how to turn such a metric into a scorer for grid search.
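For example, a minimal sketch of such a scorer using precision_score with pos_label, plugged into a grid search (the estimator and parameter grid are placeholders, and X_train/y_train are assumed to exist):

from sklearn.metrics import make_scorer, precision_score
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Precision of class 0 only; use pos_label=1 for the other class
precision_of_0 = make_scorer(precision_score, pos_label=0)

search = GridSearchCV(RandomForestClassifier(),                 # placeholder estimator
                      param_grid={'n_estimators': [100, 300]},  # placeholder grid
                      scoring=precision_of_0, cv=5)
search.fit(X_train, y_train)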

Related

Change class weights and classification threshold to deal with unbalanced dataset

I'm working on my thesis and I used a CatBoost classifier to perform a binary classification on a very unbalanced dataset:
class0 = x number of samples
class1 = 10*x number of samples
In order to optimize the performance of the model I changed the class weights, giving a higher weight to the minority class, and then I performed a grid-search cross-validation that looks for the set of hyperparameters minimizing the cross-entropy loss of the CatBoost model.
At this point I also changed the classification threshold by maximizing the G-mean metric (square root of sensitivity multiplied by specificity).
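For reference, that threshold-tuning step could be sketched as follows with scikit-learn's roc_curve (y_val and proba are placeholders for validation labels and predicted class-1 probabilities):

import numpy as np
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_val, proba)
gmeans = np.sqrt(tpr * (1 - fpr))               # G-mean = sqrt(sensitivity * specificity)
best_threshold = thresholds[np.argmax(gmeans)]  # threshold that maximizes the G-mean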
In your opinion, if you are experienced with or informed about boosting ensemble methods, is it right to proceed in this way to improve the performance of the model when the dataset is unbalanced? Or would it be enough to change the weights and use the grid search, without also changing the classification threshold?
Thank you in advance!

deterministic function in Matlab for clustering

I have been using Matlab built-in kmeans function to do clustering. Due to randomness used in the algorithm, the results are different if I set seeds differently. This is a little annoying. Is there a way to reduce the discrepancy of the clustering results? Alternatively, is there a deterministic function in Matlab for clustering?
If you have the Image Processing Toolbox, there are tools which use Otsu's method, which is deterministic:
https://en.wikipedia.org/wiki/Otsu's_method
If datain is your input data:
For 2 classes:
threshold = graythresh(datain);
Here threshold is the value for splitting the data into 2 classes, normalized to [0,1].
For multiple classes:
thresholds = multithresh(datain,N);
Here N is the number of thresholds and thresholds is a 1xN vector of (non-normalized) threshold values.
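If you then want to turn those thresholds into cluster labels, one possibility (imquantize is also part of the Image Processing Toolbox) is:

labels = imquantize(datain, thresholds);   % assigns each element a label from 1 to N+1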
It is normal.
The k-means algorithm creates new cluster assignments after each iteration, hence the results may differ between runs.
For example: suppose the algorithm has to determine which fruit is an apple and which is a pear. It can label the apples as "pears", but then all apples will consistently end up in the "pear" cluster and all pears in the "apple" cluster; only the labels are swapped.
I came up with some methods to reduce the discrepancy of the clustering results.
Pass 'OnlinePhase','on' in the arguments to kmeans. This leads to a local minimum which is often the global minimum.
Pass 'Replicates', 5 in the arguments. Here 5 can be replaced with an even larger number. It asks Matlab to run kmeans 5 times and choose the best result.
Pass 'MaxIter', 1000 in the arguments. This increases the maximum number of iterations from the default 100 to 1000, which could, though it is unlikely to, improve the accuracy.
As long as we aim for the best outcome from kmeans, we are more likely to get consistent results; a call combining these options is sketched below.
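A minimal sketch combining these options (X and K are placeholders for the data matrix and the number of clusters); fixing the random seed with rng additionally makes a single run reproducible:

rng(0);                                   % fix the seed for reproducibility (optional)
[idx, C] = kmeans(X, K, ...
    'OnlinePhase', 'on', ...              % refine assignments with the online update phase
    'Replicates', 5, ...                  % run kmeans 5 times, keep the lowest-cost solution
    'MaxIter', 1000);                     % allow up to 1000 iterations instead of the default 100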

MATLAB computing Bayesian Information Criterion with the fit.m results

I'm trying to compute the Bayesian Information Criterion (BIC) with results from fit.m
According to Wikipedia, the log-likelihood can be approximated (when the noise is ~N(0,sigma^2)) as:
L = -(n/2)*log(2*pi*sigma^2) - rss/(2*sigma^2)
with n as the number of samples, k as the number of free parameters, and rss as residual sum of squares. And BIC is defined as:
-2*L + k*log(n)
But this is a bit different from the fitglm.m result even for simple polynomial models and the discrepancy seems to increase when higher order terms are used.
Because I want to fit Gaussian models and compute their BICs, I cannot just use fitglm.m. Or is there any other way to write a Gaussian model in Wilkinson notation? I'm not familiar with the notation, so I don't know if it's possible.
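For concreteness, the by-hand computation described above might look roughly like this in MATLAB (a sketch only; x, y, fitresult and k are placeholders for the data, the fit.m result and the number of free parameters):

rss    = sum((y - feval(fitresult, x)).^2);          % residual sum of squares
n      = numel(y);                                   % number of samples
sigma2 = rss / n;                                    % noise variance estimated from the residuals
L      = -(n/2)*log(2*pi*sigma2) - rss/(2*sigma2);   % Gaussian log-likelihood
BIC    = -2*L + k*log(n);                            % k = number of free parameters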
I'm not 100% sure this is your issue, but I think your definition of BIC may be misunderstood.
The Bayesian Information Criterion (BIC) is an approximation to the log of the evidence, and is defined as:
log P(D) ≈ log P(D | theta_MAP) - (M/2)*log(N)
where D is the data, M is the number of adaptive parameters of your model, N is the data size, and, most importantly, theta_MAP is the maximum a posteriori estimate for your model / parameter set.
Compare for instance with the much simpler Akaike Information Criterion (AIC):
log P(D | theta_ML) - M
which relies on the usually simpler to obtain maximum likelihood estimate theta_ML of the model instead.
Your sigma is simply a parameter, which is subject to estimation. If the sigma^2 you're using here is derived from the sample variance, for instance, then that simply corresponds to the theta_ML estimate, and not the theta_MAP one.
So, your discrepancy may simply derive from the builtin function using the 'correct' estimate and you using the wrong one in your 'by-hand' calculations of the BIC.

Does sklearn support a cost matrix?

Is it possible to train classifiers in sklearn with a cost matrix that assigns different costs to different mistakes? For example, in a 2-class problem the cost matrix would be a 2 by 2 square matrix with A_ij = the cost of classifying i as j.
The main classifier I am using is a Random Forest.
Thanks.
The cost-sensitive framework you describe is not supported in scikit-learn, in any of the classifiers we have.
You could use a custom scoring function that accepts a matrix of per-class or per-instance costs. Here's an example of a scorer that calculates per-instance misclassification cost:
def financial_loss_scorer(y, y_pred, **kwargs):
    import pandas as pd

    totals = kwargs['totals']

    # Create an indicator - 0 if correct, 1 otherwise
    errors = pd.DataFrame((~(y == y_pred)).astype(int).rename('Result'))

    # Use the product totals dataset to create results
    results = errors.merge(totals, left_index=True, right_index=True, how='inner')

    # Calculate per-prediction loss
    loss = results.Result * results.SumNetAmount

    return loss.sum()
The scorer then becomes:
from sklearn.metrics import make_scorer

scorer = make_scorer(financial_loss_scorer, totals=totals_data, greater_is_better=False)
Where totals_data is a pandas.DataFrame with indexes that match the training set indexes.
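For illustration, the resulting scorer could then be plugged into cross-validation along these lines (the estimator is a placeholder, and y_train is assumed to be a pandas Series whose index lines up with totals_data):

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

scores = cross_val_score(RandomForestClassifier(),   # placeholder estimator
                         X_train, y_train,           # y_train: pandas Series with a matching index
                         scoring=scorer, cv=5)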
You could always just look at your ROC curve. Each point on the ROC curve corresponds to a separate confusion matrix, so choosing a classifier threshold amounts to specifying a confusion matrix, which in turn implies some sort of cost weighting scheme. You then just have to choose the threshold whose confusion matrix implies the cost matrix you are looking for.
On the other hand, if you really had your heart set on it, and really want to "train" an algorithm using a cost matrix, you could "sort of" do it in sklearn.
Although it is impossible to directly train an algorithm to be cost-sensitive in sklearn, you could use a cost-matrix-like setup to tune your hyperparameters. I've done something similar to this using a genetic algorithm. It really doesn't do a great job, but it should give a modest boost to performance.
One way to circumvent this limitation is to use under- or oversampling. E.g., if you are doing binary classification with an imbalanced dataset and want to make errors on the minority class more costly, you could oversample it. You may want to have a look at imbalanced-learn, which is a package from scikit-learn-contrib.
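For example, a minimal oversampling sketch with imbalanced-learn (variable names are illustrative):

from imblearn.over_sampling import RandomOverSampler

# Duplicate minority-class samples until both classes are the same size
ros = RandomOverSampler(random_state=0)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)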
This may not directly answer your question (since you are asking about Random Forest), but for SVMs in sklearn you can use the class_weight parameter to specify the weights of different classes. Essentially, you pass in a dictionary, as sketched below.
You might want to refer to this page to see an example of using class_weight.
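For instance, a minimal class_weight sketch (the 10:1 weights are purely illustrative); note that RandomForestClassifier in scikit-learn accepts the same parameter:

from sklearn.svm import SVC

# Make mistakes on class 1 ten times more costly than mistakes on class 0
clf = SVC(kernel='rbf', class_weight={0: 1, 1: 10})
clf.fit(X_train, y_train)   # assumes X_train and y_train already exist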

SVM Classification with Cross Validation

I am new to using Matlab and am trying to follow the example in the Bioinformatics Toolbox documentation (SVM Classification with Cross Validation) to handle a classification problem.
However, I am not able to understand Step 9, which says:
Set up a function that takes an input z=[rbf_sigma,boxconstraint], and returns the cross-validation value of exp(z).
The reason to take exp(z) is twofold:
rbf_sigma and boxconstraint must be positive.
You should look at points spaced approximately exponentially apart.
This function handle computes the cross validation at parameters
exp([rbf_sigma,boxconstraint]):
minfn = @(z)crossval('mcr',cdata,grp,'Predfun', ...
    @(xtrain,ytrain,xtest)crossfun(xtrain,ytrain, ...
    xtest,exp(z(1)),exp(z(2))),'partition',c);
What is the function that I should be implementing here? Is it exp or minfn? I would appreciate it if you could give me the code for this section. Thanks.
I would like to know what it means when it says exp([rbf_sigma,boxconstraint]).
rbf_sigma: the SVM is using a Gaussian kernel, and rbf_sigma sets the standard deviation (~size) of that kernel. To understand how kernels work: the SVM puts a kernel around every sample (so that you have a Gaussian around every sample). Then the kernels are summed for the samples of each category/type. At each point, the type whose sum is higher is the "winner". For example, if type A has a higher sum of these kernels at point X, then a new datum at point X will be classified as type A. (There are other configuration parameters that may change the actual threshold at which one category is selected over another.)
Fig.: see the figure on the documentation page you linked (not reproduced here). By adding up the Gaussian kernels on the red samples (sumA) and on the green samples (sumB), it is logical that sumA > sumB in the center part of the figure, and likewise that sumB > sumA in the outer part of the image.
boxconstraint: it is a cost/penalty on misclassified data. During the training stage of the classifier, where you use the training data to adjust the SVM parameters, the training algorithm uses an error function to decide how to optimize the SVM parameters in an iterative fashion. The cost of a misclassified sample is proportional to how far it is from the boundary where it would have been classified correctly. In the figure referred to above, that boundary is the inner blue circumference.
Taking into account BGreene's indications and what I understand of the tutorial:
In the tutorial they advise trying values for rbf_sigma and boxconstraint that are exponentially apart. This means that you should compare values like {0.2, 2, 20, ...} (note that this is {2*10^(i-2), i=1,2,3,...}), and NOT like {0.2, 0.3, 0.4, 0.5} (which would be linearly apart). They advise this in order to try a wide range of values first. You can further optimize later FROM the first optimum that you obtained before.
The command "[searchmin fval] = fminsearch(minfn,randn(2,1),opts)" will give you back the optimum values for rbf_sigma and boxconstraint. Probably you have to use exp(z) because it affects how fminsearch increments the values of z(1) and z(2) during the search for the optimum value. I suppose that when you put exp(z(1)) in the definition of #minfn, then fminsearch will take 'exponentially' big steps.
In machine learning, always try to understand that there are three subsets in your data: training data, cross-validation data, and test data. The training set is used to optimize the parameters of the SVM classifier for EACH value of rbf_sigma and boxconstraint. Then the cross-validation set is used to select the optimum values of the parameters rbf_sigma and boxconstraint. And finally, the test data is used to get an idea of the performance of your classifier (the performance of the classifier is evaluated on the test set).
So, if you start with 10000 samples you may divide the data for example as training(50%), cross-validation(25%), test(25%). So that you will sample randomly 5000 samples for the training set, then 2500 samples from the 5000 remaining samples for the cross-validation set, and the rest of samples (that is 2500) would be separated for the test set.
I hope that I could clarify your doubts. By the way, if you are interested in the optimization of the parameters of classifiers and machine learning algorithms I strongly suggest that you follow this free course -> www.ml-class.org (it is awesome, really).
You need to implement a function called crossfun (see example).
The function handle minfn is passed to fminsearch to be minimized.
exp([rbf_sigma,boxconstraint]) is the quantity being optimized to minimize classification error.
There are a number of functions nested within this function handle:
- crossval - produces the classification error based on cross-validation using partition c
- crossfun - classifies data using an SVM
- fminsearch - optimizes the SVM hyperparameters to minimize classification error
Hope this helps
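As a rough sketch only (the toolbox example itself uses the older svmtrain/svmclassify functions), a crossfun with the expected signature could be written with fitcsvm, assuming its 'KernelScale' argument plays the role of rbf_sigma:

function yfit = crossfun(xtrain, ytrain, xtest, rbf_sigma, boxconstraint)
% Train an RBF-kernel SVM on the training fold and classify the test fold
mdl  = fitcsvm(xtrain, ytrain, 'KernelFunction', 'rbf', ...
               'KernelScale', rbf_sigma, 'BoxConstraint', boxconstraint);
yfit = predict(mdl, xtest);
end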