I am attempting to use KMeans clustering to create a feature for an XGBoost regression. The problem is, I am not sure whether this introduces data leakage. The data have a date, so right now I am clustering on the first 70% of the data sorted by date, and using that same portion as my training set.
The target variable is included in the clustering. Using the cluster as a feature gives a huge boost to test scores, so I worry that this is causing data leakage. However, the cluster labels used for the test scores are assigned to data in the test set that the clustering never saw.
Is this valid, or is it causing data leakage? Thank you
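One way to avoid the leakage being asked about is to fit the clustering on the training portion's features only, with the target column excluded, and then assign test rows to the nearest learned centroid. A minimal sketch, assuming pandas and scikit-learn; the file name, the "date"/"y" column names and n_clusters are placeholders, not details from the question:

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("data.csv", parse_dates=["date"]).sort_values("date")
split = int(len(df) * 0.7)                      # first 70% by date = training set
train, test = df.iloc[:split].copy(), df.iloc[split:].copy()

feature_cols = [c for c in df.columns if c not in ("date", "y")]   # exclude the target "y"

scaler = StandardScaler().fit(train[feature_cols])
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaler.transform(train[feature_cols]))

train["cluster"] = km.labels_                                       # learned on train only
test["cluster"] = km.predict(scaler.transform(test[feature_cols]))  # test rows assigned, never fitted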
Related
I am running a few classification models such as logistic regression and CatBoost. I have held out part of the training set as unseen data.
When I compute the metrics on both the training data and the unseen data, logistic regression gives me accuracy, AUC, F1 and recall all greater than 0.90. As it is a class-imbalance problem, I have balanced the classes using SMOTE, and I have used z-scores to normalise all the variables.
Although the model performs well on the training, unseen and test data, when I actually run it on the unlabelled data I want to predict, the model gives me only 10 1s and the remaining 150k are 0s.
Could there really be an issue with my model, or is the data simply like that?
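One common cause of this symptom is applying SMOTE and the z-scoring before the train/test split, which leaks information into the evaluation and inflates the metrics. A minimal sketch, assuming scikit-learn and imbalanced-learn, with synthetic data standing in for the real set, that keeps both steps inside the training folds only:

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for the labelled data (replace with your own X, y).
X, y = make_classification(n_samples=5000, n_classes=2, weights=[0.95, 0.05], random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),        # z-scoring fitted on the training fold only
    ("smote", SMOTE(random_state=0)),   # oversampling applied to the training fold only
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean())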
I use the Weka tool for my data mining work. When I feed in the data set and cluster it using the SimpleKMeans algorithm, it displays the following statement:
Incorrectly clustered instances : 857.0 69.7883 %
Is it OK to proceed with that percentage? If not, please let me know how to reduce it.
If you have labels, then use them, and do not use clustering at all.
Clustering is meant for data where you do not have labels.
How do you plan to proceed?
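To make that advice concrete, here is a rough sketch in scikit-learn rather than Weka (the iris data is just a stand-in for a labelled dataset): train a classifier on the labels and evaluate it on a held-out test set instead of clustering.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # stand-in for your labelled data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)   # the labels drive the training
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))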
I am trying to classify four groups of images using an SVM, randomly selecting the training and testing data each time. When I run the program, the performance varies because the data are selected at random. How can I get an accurate measure of my algorithm's performance, and how do I calculate the training and testing accuracy?
The formula I am using for performance is
Performance = sum(PredictedLabels == test_labels) / numel(PredictedLabels)
I am using the multisvm function for classification.
My suggestion:
Actually, the performance measure is acceptable, though there are some slightly better choices, as #Dan has mentioned.
More importantly, you need to deal with randomness.
1) Every time you select your training data, test the trained model on multiple randomized test sets and average the accuracy (e.g. 10 times or so).
2) Train multiple models and average their performance to estimate the general performance.
Remark:
1) You need to make sure the training data and test data do not overlap; otherwise it is no longer test data.
2) It is better for the training data to contain the same number of samples from each class label, which means you can partition your dataset accordingly in advance.
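A sketch of both suggestions, in Python/scikit-learn as an illustration rather than the original MATLAB setup (synthetic data stands in for the image features): repeat the stratified random split several times, compute training and testing accuracy on each run, and report the averages.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

# Synthetic stand-in for the image features and the four class labels.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
train_acc, test_acc = [], []

for train_idx, test_idx in splitter.split(X, y):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    train_acc.append(accuracy_score(y[train_idx], clf.predict(X[train_idx])))
    test_acc.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print("mean training accuracy:", np.mean(train_acc))
print("mean testing accuracy:", np.mean(test_acc))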
These days I am using some clustering algorithms, and I just wanted to ask a question related to this field. Maybe those who are working in this field already have an answer.
For clustering I need some training data, which I am going to cluster. The number of iterations (e.g. in the K-Means algorithm) depends on the amount of training data (the number of vectors). Is there any method to find the most important data in the training set? What I mean is: instead of training K-Means with all the data, maybe there is a method to find just the important vectors (those that affect the clusters the most) and use these "important" vectors from the training data to train the algorithm.
I hope you understood me.
Thank you for reading and trying to answer.
"Training" and "Test" data is a concept from classification, not from cluster analysis.
K-means is a statistical method. If you want to speed it up, running it on a large enough random sample should give you nearly the same result.
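A rough sketch of that sampling idea with scikit-learn (the blob data, sample size and n_clusters are placeholders): fit K-means on a random subsample, then assign every point to the learned centroids.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for a large dataset (replace X with your own data).
X, _ = make_blobs(n_samples=100_000, centers=8, random_state=0)

rng = np.random.default_rng(0)
sample = rng.choice(len(X), size=10_000, replace=False)               # random subsample

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X[sample])   # fit on the sample only
labels = km.predict(X)                                                # assign every point to the learned centroids

scikit-learn's MiniBatchKMeans gives a similar speed-up by processing small random batches instead of an explicit one-off subsample.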
I'm a newbie in the research area of data mining (text clustering) and I have a couple of questions regarding training and test datasets.
Does clustering need training and testing datasets?
Why do we need to separate the data into training and test datasets?
Sorry for the rookie questions; I hope the experts in this group can help me.
As your question is on clustering:
In cluster analysis, there usually is no training or test data split.
You do cluster analysis when you do not have labels, so you cannot "train".
Training is a concept from machine learning, and train-test splitting is used to avoid overfitting.
But if you are not learning labels, you cannot overfit.
Properly used cluster analysis is a knowledge discovery method. You want to discover some new structure in your data, not rediscover something that is already labeled.
To train a model you need a set of relevant data that is similar, but not identical, to your testing data. For example, you could split up your data so that 70% of it is used for training and the rest for testing. Training lets the algorithm get a feel for what it should be looking for. The remaining 30% can then be used for testing, since it is a distinct set of information (hopefully), which should allow the algorithm to be evaluated fairly.
Why split it up?
Well, if you train your model on data A and then test your algorithm on data A, your algorithm will be able to identify all the information correctly, because that is exactly what it was trained on.
For example, if, when learning addition, you were given the sums 3+4, 4+5 and 6+9, which you correctly solved, it would be redundant to test your knowledge of addition using the same sums.
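As a concrete illustration of the 70/30 split (a scikit-learn sketch; the digits data and the classifier are just stand-ins for your own labelled examples and model):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

X, y = load_digits(return_X_y=True)                                  # toy labelled dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

model = MultinomialNB().fit(X_train, y_train)                        # learn only from the training 70%
print("accuracy on the unseen 30%:", model.score(X_test, y_test))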
Further information:
http://en.wikipedia.org/wiki/Natural_language_processing
http://www.nltk.org/book
Hope this helps.