I've used the following code to compute the Mutual Information and Chi Square values for feature selection in Sentiment Analysis.
MI = ((N11/N) * math.log((N*N11) / ((N11+N10)*(N11+N01)), 2)
    + (N01/N) * math.log((N*N01) / ((N01+N00)*(N11+N01)), 2)
    + (N10/N) * math.log((N*N10) / ((N10+N11)*(N00+N10)), 2)
    + (N00/N) * math.log((N*N00) / ((N10+N00)*(N01+N00)), 2))
where N11, N01, N10, and N00 are the observed co-occurrence frequencies of the two features in my data set (N11 = both features present, N00 = both absent, N10 and N01 = exactly one of them present) and N = N11 + N10 + N01 + N00.
NOTE: I am trying to calculate the mutual information and chi-squared values between two features, not between a particular feature and a class. I'm doing this so I'll know whether the two features are related in any way.
The chi-squared formula I've used is:
E00 = N*((N00+N10)/N)*((N00+N01)/N)
E01 = N*((N01+N11)/N)*((N01+N00)/N)
E10 = N*((N10+N11)/N)*((N10+N00)/N)
E11 = N*((N11+N10)/N)*((N11+N01)/N)
chi = ((N11-E11)**2)/E11 + ((N00-E00)**2)/E00 + ((N01-E01)**2)/E01 + ((N10-E10)**2)/E10
Where E00,E01,E10,E11 are the expected frequencies.
By the definition of mutual information, a low value should mean that one feature gives me little information about the other, and by the definition of chi-squared, a low chi-squared value means we cannot reject the hypothesis that the two features are independent.
But for a particular pair of features, I got a mutual information score of 0.00416 and a chi-squared value of 4373.9. This doesn't make sense to me: the mutual information score indicates the features aren't closely related, but the chi-squared value seems high enough to indicate they aren't independent either. I think I'm going wrong with my interpretation.
The values I got for the observed frequencies are
N00 = 312412
N01 = 276116
N10 = 51120
N11 = 68846
MI and Pearson's large-sample statistic are, under the usual conditions concerning sample size, directly proportional. This is quite well known; a proof is given in:
Morris, A.C. (2002), "An information theoretic measure of sequence recognition performance", which can be downloaded from https://sites.google.com/site/andrewcameronmorris/Home/publications
Therefore, unless there is some mistake in your calculations, if one is high/low the other must be high/low.
The chi-squared independence test examines raw counts, while the mutual information score examines only the marginal and joint probability distributions. Hence chi-squared, unlike MI, also scales with the sample size.
If the dependence between x and y is very subtle, then knowing one won't help very much in terms of predicting the other. However, as the size of the dataset increases we can become increasingly confident that some relationship exists.
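To see the two statistics side by side on your own counts, here is a minimal sketch in plain Python (it only assumes the 2x2 counts from the question; the helper names mi_bits and chi_square are mine). The last line prints the large-sample approximation chi-squared ≈ 2·N·ln(2)·MI (with MI in bits), which is why a tiny MI and a chi-squared in the thousands are perfectly consistent when N is around 700,000:

import math

# Observed co-occurrence counts from the question
N11, N10, N01, N00 = 68846, 51120, 276116, 312412
N = N11 + N10 + N01 + N00

def mi_bits(n11, n10, n01, n00):
    # Mutual information (in bits) of two binary features from their 2x2 counts.
    n = n11 + n10 + n01 + n00
    cells = [(n11, n11 + n10, n11 + n01),   # (count, row marginal, column marginal)
             (n10, n11 + n10, n10 + n00),
             (n01, n01 + n00, n11 + n01),
             (n00, n01 + n00, n10 + n00)]
    return sum((nij / n) * math.log2(n * nij / (row * col))
               for nij, row, col in cells if nij > 0)

def chi_square(n11, n10, n01, n00):
    # Pearson chi-squared statistic for the same 2x2 table.
    n = n11 + n10 + n01 + n00
    cells = [(n11, n11 + n10, n11 + n01),
             (n10, n11 + n10, n10 + n00),
             (n01, n01 + n00, n11 + n01),
             (n00, n01 + n00, n10 + n00)]
    return sum((nij - row * col / n) ** 2 / (row * col / n)
               for nij, row, col in cells)

mi = mi_bits(N11, N10, N01, N00)
chi = chi_square(N11, N10, N01, N00)
print(mi)                          # roughly 0.004 bits: a weak dependence
print(chi)                         # roughly 4400: huge, because N is huge
print(2 * N * math.log(2) * mi)    # the G-statistic approximation, same order as chi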
You can try https://github.com/ranmoshe/Inference - it calculates both the MI and the p-value of the chi-squared statistic.
It also calculates the degrees of freedom for each feature, including taking a conditioning group into account (where the degrees of freedom for a feature may differ between values of the group).
In one hierarchical model, we have two hyperparameters: dnorm(A_mu, 0.25^-2) and dnorm(B_mu, 0.25^-2). In this case 0.25 is the sd, which I fix. A_mu and B_mu represent the group-level means. After fitting the data with rjags, we get posterior distributions for each parameter. So do I just directly compare the highest posterior density intervals (HDIs) of A_mu and B_mu? Do I need to calculate something using the sd (0.25)?
In another case, the sd of the two hyperparameters is not fixed, like this: dnorm(A_mu, A_sd) and dnorm(B_mu, B_sd). How can I compare the two hyperparameters and make a decision, e.g. that one group is significantly different from the other group?
Remember that you are getting posterior distributions for A_mu and B_mu. This makes your comparison easy, as you can look at the 95% credible intervals for the parameters (or pick a credible level that satisfies your needs). I believe JAGS uses Gibbs sampling, so you should be able to get the raw samples from the posteriors of A_mu and B_mu. You can then ask "what is the probability that B_mu is greater than some value?" by calculating the percentage of posterior samples that are greater than that value. Alternatively, and in a similar way to frequentist hypothesis testing, you can ask what the probability is that the mean of B_mu is a draw from the posterior of A_mu. So the key is just to work directly with the samples from your posterior. I would recommend taking a look at Andrew Gelman's BDA3 textbook (Chapter 4) for a really good reference on these concepts.
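As a concrete illustration of "just use the samples", here is a minimal sketch in Python (not rjags itself); the arrays stand in for the posterior draws of A_mu and B_mu that you would export from your MCMC output, and the numbers are placeholders:

import numpy as np

# Placeholder posterior draws; in practice these come from the same MCMC
# iterations of your rjags fit, so pairwise comparisons use the joint posterior.
A_mu_draws = np.random.normal(0.0, 1.0, size=10000)
B_mu_draws = np.random.normal(0.3, 1.0, size=10000)

# Probability that B_mu exceeds A_mu, estimated from the posterior draws
p_b_greater = np.mean(B_mu_draws > A_mu_draws)

# 95% credible interval for the difference B_mu - A_mu
diff = B_mu_draws - A_mu_draws
ci_low, ci_high = np.percentile(diff, [2.5, 97.5])

print(p_b_greater, (ci_low, ci_high))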
A few things to keep in mind before drawing conclusions from the data: (1) you should always check the validity of your Markov chains by evaluating things like autocorrelation; (2) try to do a posterior predictive check to make sure your model fits the data well. If your model is poorly fit to the data, then you can get very misleading results from the procedure above.
Consider the following examples of the Pearson correlation coefficient on sets of film ratings by users A and B:
A = [2,4,4,4,4]
B = [5,4,4,4,4]
pearson(A,B) = -1
A = [5,5,5,5,5]
B = [5,5,5,5,5]
pearson(A,B) = NaN
Pearson correlation seems widely used for calculating the similarity between two sets in collaborative filtering. However, the rating sets above are intuitively very similar (the second pair is even identical), yet the output for the first pair is a perfect negative correlation, and the second pair fails with a division-by-zero error.
I initially thought it was an issue in my implementation, but I've since validated it against a few online calculators.
If the outputs are correct, why is Pearson correlation considered a good choice for this application?
Pearson correlation measures the association between two data sets, i.e. how they increase or decrease together.
In visual terms: how close the points lie to a straight line if one set is plotted on the x-axis and the other on the y-axis.
(Figure omitted: an example of positive correlation between two data sets on different scales.)
In your second example, every value in each set is the same, so their standard deviations are zero; the product of the standard deviations appears in the denominator of the Pearson correlation calculation, hence the result is undefined.
This means it is not possible to estimate the correlation, i.e. how one data set increases or decreases along with the other.
If you plotted one set against the other, all the data points would lie on a single point, so there is no correlation pattern to estimate.
A very simple solution would be to handle these cases separately. Or, if you want to go through the same flow, a neat hack is to make sure the standard deviation of each set is non-zero. A non-zero standard deviation can be achieved by altering a single value in each set by a tiny amount; since the data sets are essentially identical, this still gives you a high correlation coefficient.
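For instance, here is a minimal sketch of the "handle these cases separately" option in Python with NumPy (safe_pearson and the 0.0 fallback are my own illustrative choices, not a standard API):

import numpy as np

def safe_pearson(a, b):
    # Pearson correlation with the zero-variance case handled separately:
    # if either set is constant, return a fixed value instead of NaN.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0   # constant ratings carry no co-variation information
    return np.corrcoef(a, b)[0, 1]

print(safe_pearson([2, 4, 4, 4, 4], [5, 4, 4, 4, 4]))   # -1.0, as in the question
print(safe_pearson([5, 5, 5, 5, 5], [5, 5, 5, 5, 5]))   # 0.0 instead of NaN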
I would recommend that you also study other measures of similarity, like Euclidean distance, cosine similarity, and adjusted cosine similarity, and make an informed decision on which suits your use case best. A hybrid approach may also work.
Pearson correlation divides by the standard deviation of each variable, which in your case is zero, causing a divide-by-zero error. It is considered good because no real data set has a standard deviation of zero. In other words, completely uniform data sets are outside the domain of the Pearson correlation coefficient, but that's no reason not to use it.
I have a matrix composed of 35 features, and I need to reduce those features because I think many of the variables are dependent. I understood that PCA could help me do that, so using MATLAB I calculated:
[coeff,score,latent] = pca(list_of_features)
I notice "coeff" contains matrix which I understood (correct me if I'm wrong) have column with high importance on the left, and second column with less importance and so on. However, it's not clear for me which column on "coeff" relate to which column on my original "list_of_features" so that I could know which variable is more important.
PCA doesn't give you an order relation on your original features (which feature is more 'important' than the others); rather, it gives you directions in feature space, ordered by variance, from high variance (the 1st direction, or principal component) to low variance. A direction is generally a linear combination of your original features, so you can't expect to get information about a single feature.
What you can do is throw away a direction (or several), or in other words project your data onto the sub-space spanned by a subset of the principal components. Usually you want to throw away the directions with low variance, but that's really a choice that depends on your application.
Let's say you want to keep only the first k principal components:
x = score(:,1:k) * coeff(:,1:k)';
Note, however, that pca centers the data, so what you actually get is the projection of the centered version of your data.
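If it helps to see the same steps spelled out, here is an equivalent sketch in Python/NumPy rather than MATLAB (X is a made-up n-by-35 matrix, and the 95% variance cutoff is just one illustrative way to pick k):

import numpy as np

X = np.random.rand(100, 35)                       # placeholder feature matrix

Xc = X - X.mean(axis=0)                           # pca() centers the data like this
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coeff = Vt.T                                      # columns = principal directions
score = Xc @ coeff                                # coordinates in the new basis
latent = s**2 / (len(X) - 1)                      # variance along each direction

explained = np.cumsum(latent) / latent.sum()
k = int(np.searchsorted(explained, 0.95)) + 1     # keep ~95% of the variance, say

X_reduced = score[:, :k] @ coeff[:, :k].T         # centered data projected onto k components
X_approx = X_reduced + X.mean(axis=0)             # add the mean back to return to the original scale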
I am trying to use a kNN classifier for some supervised learning. To find the best value of k, I used cross-validation. For example, the following code loads one of MATLAB's standard data sets and runs cross-validation, plotting various values of k against the cross-validation error:
load ionosphere;
[N,D] = size(X)
resp = unique(Y)
rng(8000,'twister') % for reproducibility
K = round(logspace(0,log10(N),10)); % number of neighbors
cvloss = zeros(numel(K),1);
for k=1:numel(K)
knn = ClassificationKNN.fit(X,Y,...
'NumNeighbors',K(k),'CrossVal','On');
cvloss(k) = kfoldLoss(knn);
end
figure; % Plot the cross-validation error versus k
plot(K,cvloss);
xlabel('Number of nearest neighbors');
ylabel('10 fold classification error');
title('k-NN classification');
(The resulting plot of 10-fold classification error versus k is not reproduced here.)
The best k in this case is k=2 (it is not an exhaustive search). From the figure, we can see that the cross-validation error goes up dramatically for k>50, reaching a large error and becoming stable for k>100.
My question is what is the maximum k we should test in this kind of cross validation framework?
For example, there are two classes in the 'ionosphere' data. One class labeled as 'g' and one labeled as 'b'. There are 351 instances in total. For 'g' there are 225 cases and for 'b' there are 126 cases.
In the code above, the largest k tested is 351. But should we only test from 1 up to 126, or up to 225? Is there a relation between the class sizes and the maximum k?
The best way to choose a parameter in a classification problem is to choose it using domain knowledge, which is certainly not what you are doing here. If your data set is small enough to run many classifications with different parameter values, you can do that, but to be reasonable you need to show that the parameter you chose is not chosen at random: you need to explain the behaviour of the plot you have drawn.
In this case the curve is ascending, so you can argue that 2 is the best choice.
In most cases you will not choose k larger than 20, but there is no proof of that, and you need to keep running the classification until you can justify your choice.
You don't want k to be too large (i.e. too close to the number of examples), because then the k-neighborhood of each query example contains a large fraction of the space, so the prediction depends less and less on the actual location of the query and more on the overall statistics. This explains why the performance is not good for large k: your classifier essentially always chooses 'g', and gets it wrong 126/351 ≈ 36% of the time, as you see in the plot.
Theory suggests that k needs to grow as the number of labeled examples grows, but sub-linearly.
When you have lots of training data, you want k to be large because you want a good estimate of the likelihood that a point near the query point has each label. This allows you to approximate the maximum a posteriori decision rule (which is optimal, assuming you know the actual distribution).
So here are some practical tips:
Get more data if you can. Then run the experiment again.
Focus on small values of k. My bet is that k=3 is better than k=2. Usually for binary classification k is at least 3, and usually an odd number (to avoid ties).
The fact that k=2 appears better does not really make sense: the only case in which k=1 differs from k=2 is when the 2 nearest neighbors have different labels, and in that case the decision is made either randomly or arbitrarily (e.g. always choose 'g'), depending on the implementation of the kNN algorithm. My guess is that in the algorithm you are using the tie-breaking is fixed, and that in case of a tie it chooses 'g', which just happens to be more likely overall. If you switch the roles of the labels you will probably see that k=1 is better than k=2.
It would be interesting to see the plot for small values of k (e.g. 1 - 20).
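If it is of any help, here is a rough equivalent of that sweep in Python/scikit-learn (not the asker's MATLAB setup), restricted to small odd values of k and using a stand-in binary data set:

from sklearn.datasets import load_breast_cancer        # stand-in binary classification data
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

for k in range(1, 21, 2):                               # k = 1, 3, ..., 19
    clf = KNeighborsClassifier(n_neighbors=k)
    err = 1 - cross_val_score(clf, X, y, cv=10).mean()  # 10-fold cross-validation error
    print(k, round(err, 4))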
References:
nearest neighbor classification
Increasing the number of neighbors taken into account during classification pushes your classifier towards the majority-class choice. You only need to check the ratio of your classes to see that it equals the error rate you converge to.
Since you are using cross-validation, the k that corresponds to the minimum of your error rate is the value you should select. In this case it is 3, if I'm not mistaken.
Keep in mind that the number of cross-validation folds introduces bias into your selection of k. A more elaborate analysis would be needed there, but your 10 folds should be fine for this case.
I'm pretty new to data mining and recommendation systems, and I'm now trying to build some kind of recommender system for users that have the following attributes:
city
education
interest
To calculate similarity between them I'm going to apply cosine similarity and a discrete similarity.
For example:
city : if x = y then d(x,y) = 0. Otherwise, d(x,y) = 1.
education : here I will use cosine similarity over the words that appear in the name of the department or bachelor's degree
interest : there will be a hard-coded number of interests a user can choose from, and cosine similarity will be calculated based on two vectors like this:
1 0 0 1 0 0 ... n
1 1 1 0 1 0 ... n
where 1 means the presence of the interest and n is the total number of all interests.
My question is:
How do I combine those 3 similarities in an appropriate way? I mean, just summing them doesn't sound quite smart, does it? Also I would like to hear comments on my "newbie similarity system", hah.
There are no hard-and-fast answers, since the answers here depend greatly on your input and problem domain. A lot of the work of machine learning is the art (not science) of preparing your input, for this reason. I can give you some general ideas to think about. You have two issues: making meaningful similarities out of each of these items, and then combining them.
The city similarity sounds reasonable but really depends on your domain. Is it really the case that being in the same city means everything, and being in neighboring cities means nothing? For example does being in similarly-sized cities count for anything? In the same state? If they do your similarity should reflect that.
Education: I understand why you might use cosine similarity but that is not going to address the real problem here, which is handling different tokens that mean the same thing. You need "eng" and "engineering" to match, and "ba" and "bachelors", things like that. Once you prepare the tokens that way it might give good results.
Interest: I don't think cosine will be the best choice here; try a simple Tanimoto coefficient similarity (just the size of the intersection over the size of the union).
You can't just sum them, as I assume you still want a value in the range [0,1]. You could average them. That makes the assumption that the output of each of these are directly comparable, that they're the same "units" if you will. They aren't here; for example it's not as if they are probabilities.
It might still work OK in practice to average them, perhaps with weights. Note, though, that under a plain average being in the same city is exactly as important as having exactly the same interests. Is that true, or should it be less important?
You can try and test different variations and weights as hopefully you have some scheme for testing against historical data. I would point you at our project, Mahout, as it has a complete framework for recommenders and evaluation.
However all these sorts of solutions are hacky and heuristic. I think you might want to take a more formal approach to feature encoding and similarities. If you're willing to buy a book and like Mahout, Mahout in Action has good coverage in the clustering chapters on how to select and encode features and then how to make one similarity out of them.
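To make the weighted-average idea concrete, here is a minimal sketch in Python; the weights, the helper names, and the toy token-overlap education similarity are all illustrative choices of mine, not a recommendation:

def city_similarity(c1, c2):
    return 1.0 if c1 == c2 else 0.0                 # the question's discrete match

def education_similarity(e1, e2):
    # Toy token overlap after lowercasing; in practice normalize synonyms
    # ("eng" vs "engineering", "ba" vs "bachelors") before comparing.
    t1, t2 = set(e1.lower().split()), set(e2.lower().split())
    return len(t1 & t2) / len(t1 | t2) if t1 and t2 else 0.0

def interest_similarity(s1, s2):
    # Tanimoto/Jaccard: size of intersection over size of union.
    return len(s1 & s2) / len(s1 | s2) if s1 or s2 else 0.0

def combined_similarity(u1, u2, w_city=0.2, w_edu=0.3, w_interest=0.5):
    sims = [city_similarity(u1["city"], u2["city"]),
            education_similarity(u1["education"], u2["education"]),
            interest_similarity(u1["interests"], u2["interests"])]
    weights = [w_city, w_edu, w_interest]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

alice = {"city": "london", "education": "engineering bachelors", "interests": {0, 3, 5}}
bob   = {"city": "paris",  "education": "physics bachelors",     "interests": {0, 1, 3}}
print(combined_similarity(alice, bob))              # a single score in [0, 1]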
Here's the usual trick in machine learning.
city : if x = y then d(x,y) = 0. Otherwise, d(x,y) = 1.
I take this to mean you use a one-of-K coding. That's good.
education : here I will use cosine similarity over the words that appear in the name of the department or bachelor's degree
You can also use a one-of-K coding here, to produce a vector of size |V| where V is the vocabulary, i.e. all words in your training data.
If you now normalize the interest values so that each component always falls in the range [0,1], then you can use ordinary L1 (Manhattan) or L2 (Euclidean) distance metrics between your final vectors. For length-normalized vectors, the latter is closely related to the cosine similarity metric of information retrieval.
Experiment with L1 and L2 to decide which is best.
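Here is a minimal sketch of that one-of-K encoding followed by L1 and L2 distances, in Python/NumPy; the vocabularies and the two example users are made up for illustration:

import numpy as np

cities = ["london", "paris", "berlin"]              # hypothetical city vocabulary
edu_vocab = ["engineering", "bachelors", "physics"] # hypothetical education vocabulary
n_interests = 6                                     # total number of selectable interests

def encode(user):
    city_vec = [1.0 if user["city"] == c else 0.0 for c in cities]
    edu_words = set(user["education"].lower().split())
    edu_vec = [1.0 if w in edu_words else 0.0 for w in edu_vocab]
    interest_vec = [1.0 if i in user["interests"] else 0.0 for i in range(n_interests)]
    return np.array(city_vec + edu_vec + interest_vec)

u1 = {"city": "london", "education": "engineering bachelors", "interests": {0, 3}}
u2 = {"city": "paris",  "education": "physics bachelors",     "interests": {0, 1, 2, 4}}

v1, v2 = encode(u1), encode(u2)
print(np.abs(v1 - v2).sum())      # L1 (Manhattan) distance
print(np.linalg.norm(v1 - v2))    # L2 (Euclidean) distance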