I have the following variables:
Psychological trait data collected at pre- and post-intervention
Fitness data (e.g., weight in kg), collected at pre- and post-intervention
I am interested in seeing whether the psychological trait at baseline (pre-intervention) explains the change in fitness (e.g., weight loss) from pre- to post-intervention.
Is ANCOVA okay for this? The way I have it set up is:
Dependent: Fitness post- (continuous)
Independent: psychological trait pre- (continuous)
Covariate: Fitness pre- (continuous)
My concern is that my independent variable (psychological trait pre-) is continuous, not categorical. Is it okay to proceed with this ANCOVA, or do I need to go with a different analysis method (that allows for testing a continuous independent variable's effect on change observed between two time points in the dependent variable)?
UPDATE:
Actually, I'm wondering if it's just better to go with a linear regression model and add baseline (pre-intervention) as a covariate.
ANCOVA is a term generally used when you have a categorical factor and a continuous covariate (see e.g. Sokal, R.R. & Rohlf, F.J. 1995. Biometry. W.H. Freeman). If you have two continuous predictors, then the "ANCOVA" is generally called a multiple regression model or linear model, but it is mostly a different name for the same thing (software should give you the same result).
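To make this concrete, here is a minimal sketch in R (the data frame and variable names are hypothetical, not taken from your study); the "ANCOVA" with a continuous predictor is literally just this linear model:

# Hypothetical data frame `dat` with columns weight_pre, weight_post, trait_pre
fit <- lm(weight_post ~ weight_pre + trait_pre, data = dat)
summary(fit)   # coefficient on trait_pre = effect of the baseline trait on
               # post-intervention weight, adjusted for baseline weight

# Modelling the change score with baseline adjustment gives the identical
# estimate and test for trait_pre (only the weight_pre coefficient shifts by 1):
fit_change <- lm(I(weight_post - weight_pre) ~ weight_pre + trait_pre, data = dat)

Either way, the test of the trait_pre coefficient answers your question of whether the baseline trait predicts change after adjusting for baseline fitness.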
I have an imbalanced data set. My goal is to balance sensitivity and specificity via the confusion matrix. I used glmnet in R with class weights. The model does well at balancing sensitivity and specificity, but when I looked at the calibration plot, the probabilities are not well calibrated. I have read about calibrating probabilities, but I am wondering whether it matters if my goal is to produce class predictions. If it does matter, I have not found a way to calibrate the probabilities when using caret::train().
This topic has been widely discussed, especially in some answers by Stephan Kolassa. I will try to summarize the main take-home messages for your specific question.
From a purely statistical point of view, your interest should be in producing as output a probability for each class for any new data instance. Because you are dealing with unbalanced data, such probabilities can be small, which, as long as they are correct, is not an issue. Of course, some models can give you poor estimates of the class probabilities. In such cases, calibration adjusts the probabilities obtained from a given model so that, whenever you estimate a probability p that a new observation belongs to the target class, p is indeed close to its true probability of being in that class.
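One common way to do such a recalibration outside of caret::train() is Platt scaling: on a held-out set, fit a logistic regression of the observed outcome on the model's predicted logits, and pass new predictions through that fit. A minimal self-contained R sketch with simulated data (this only illustrates the idea, not your pipeline):

set.seed(1)
# Simulated stand-in for a held-out set: p = miscalibrated model probabilities,
# y = observed 0/1 outcomes
true_p <- runif(2000)
y      <- rbinom(2000, 1, true_p)
p      <- plogis(2 * qlogis(true_p))   # deliberately over-confident probabilities

recal  <- glm(y ~ qlogis(p), family = binomial)   # Platt scaling

# Recalibrated probability for a new raw prediction of, say, 0.9:
predict(recal, newdata = data.frame(p = 0.9), type = "response")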
If you are able to obtain a good probability estimator, then balancing sensitivity and specificity is not part of the statistical component of your problem, but rather of the decision component. The final decision will likely need to use some kind of threshold. Depending on the costs of type I and type II errors, the cost-optimal threshold may change; an optimal decision might also involve more than one threshold.
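For a single threshold, the cost-optimal cutoff follows directly from the misclassification costs. A small R sketch (the cost values are invented purely for illustration):

# Predict "positive" when its expected cost is lower:
#   (1 - p) * cost_fp < p * cost_fn   =>   p > cost_fp / (cost_fp + cost_fn)
cost_fp <- 1     # cost of a false positive
cost_fn <- 5     # cost of a false negative
threshold <- cost_fp / (cost_fp + cost_fn)    # 1/6 here, not 0.5

p <- c(0.05, 0.20, 0.60)                      # example predicted probabilities
ifelse(p > threshold, 1, 0)                   # resulting class decisions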
Ultimately, you really have to be careful about what the specific needs of the end-user of your model are, because that is what will determine the best way of making decisions with it.
I have run an ANN in MATLAB to predict a variable based on several input variables. All variables have numerical values. I could not get desirable results, although I changed the number of hidden neurons several times, ran the model many times, and so on. My question is: should I transform the input variables to get better results? How can I know which transformation I should choose? Thanks for any help.
I strongly advise you to use some methods from time series analysis, such as lagged correlation or windowed lagged correlation (with statistical tests). You can find these in most statistical packages (e.g. in R). From one small picture it is hard to tell whether your prediction is lagged or not. Testing a large amount of data can help you reveal true dependencies and avoid trusting spurious correlations.
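As a minimal illustration, here is a sketch with base R's ccf() on simulated data (the series are invented; with your own data you would plug in the observed and predicted series):

set.seed(42)
target     <- as.numeric(arima.sim(list(ar = 0.8), n = 300))
prediction <- c(rep(NA, 2), head(target, -2)) + rnorm(300, sd = 0.3)  # lags target by 2 steps

ok <- complete.cases(target, prediction)
ccf(prediction[ok], target[ok], lag.max = 10)  # a peak away from lag 0 indicates a lagged prediction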
I am attempting to train a neural network to control a simple entity in a simulated 2D environment, currently by using a genetic algorithm.
Perhaps due to lack of familiarity with the correct terms, my searches have not yielded much information on how to treat fitness and training in cases where all the following conditions hold:
There is no data available on correct outputs for given inputs.
A performance evaluation can only be made after an extended period of interaction with the environment (with continuous controller input/output invocation).
There is randomness inherent in the system.
Currently my approach is as follows:
The NN inputs are instantaneous sensor readings of the entity and environment state.
The outputs are instantaneous activation levels of its effectors, for example, a level of thrust for an actuator.
I generate a performance value by running the simulation for a given NN controller, either for a preset period of simulation time, or until some system state is reached. The performance value is then assigned as appropriate based on observations of behaviour/final state.
To prevent over-fitting, I repeat the above a number of times with different random generator seeds for the system, and assign a fitness using some metric such as average/lowest performance value.
This is done for every individual at every generation. Within a given generation, for fairness each individual will use the same set of random seeds.
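A rough sketch of this evaluation loop in R (run_simulation() and the controller representation below are invented placeholders, purely to show the seed handling):

# Stand-in for the real simulator: returns one "performance value"
run_simulation <- function(controller) {
  controller$quality + rnorm(1, sd = 0.5)
}

evaluate_fitness <- function(controller, seeds) {
  scores <- sapply(seeds, function(s) {
    set.seed(s)                 # same seed set for every individual in a generation
    run_simulation(controller)
  })
  min(scores)                   # or mean(scores), per the metric above
}

generation_seeds <- sample.int(1e6, 5)   # drawn once per generation
population <- list(list(quality = 0.2), list(quality = 0.8))
sapply(population, evaluate_fitness, seeds = generation_seeds)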
I have a couple of questions.
Is this a reasonable, standard approach to take for such a problem? Unsurprisingly it all adds up to a very computationally expensive process. I'm wondering if there are any methods to avoid having to rerun a simulation from scratch every time I produce a fitness value.
As stated, the same set of random seeds is used for the simulations for each individual in a generation. From one generation to the next, should this set remain static, or should it be different? My instinct was to use different seeds each generation to further avoid over-fitting, and that doing so would not have an adverse effect on the selective force. However, from my results, I'm unsure about this.
It is a reasonable approach, but genetic algorithms are not known for being very fast/efficient. Try hillclimbing and see if that is any faster. There are numerous other optimization methods, but nothing is great if you assume the function is a black box that you can only sample from. Reinforcement learning might work.
Using different random seeds should help prevent overfitting, but may not be necessary, depending on how representative a static test is of average performance and on how easy it is to overfit.
After we have created a Naive Bayes classifier object nb (say, with a multivariate multinomial (mvmn) distribution), we can call the posterior function on test data using the nb object. This function has 3 output parameters:
[post,cpre,logp] = posterior(nb,test)
I understand how post is computed and what it means; cpre is the predicted class, based on the maximum over the posterior probabilities for each class.
The question is about logp. It is clear how it is computed (logarithm of the PDF of each pattern in test), but I don't understand the meaning of this measure and how it can be used in the context of Naive Bayes procedure. Any light on this is very much appreciated.
Thanks.
The logp you are referring to is the log likelihood, which is one way to measure how well a model fits. We use log probabilities to prevent computers from underflowing on very small floating-point numbers, and also because adding is faster than multiplying.
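A tiny illustration of the underflow point (shown in R for brevity; nothing here is specific to MATLAB's posterior function):

p <- rep(1e-20, 50)
prod(p)        # 0 -- the product underflows in double precision
sum(log(p))    # about -2302.6 -- the log-sum is still a perfectly usable number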
If you learned your classifier several times with different starting points, you would get different results because the likelihood function is not log-concave, meaning there are local maxima that you would get stuck in. If you computed the likelihood of the posterior on your original data you would get the likelihood of the model. Although the likelihood gives you a good measure of how one set of parameters fits compared to another, you need to be careful that you're not overfitting.
In your case, you are computing the likelihood on some unobserved (test) data, which gives you an idea of how well your learned classifier is fitting on the data. If you were trying to learn this model based on the test set, you would pick the parameters based on the highest test likelihood; however in general when you're doing this it's better to use a validation set. What you are doing here is computing predictive likelihood.
Computing the log likelihood is not limited to Naive Bayes classifiers and can in fact be done for any probabilistic model (Gaussian mixtures, latent Dirichlet allocation, etc.).
I am trying to solve classification problem using Matlab GPTIPS framework.
I managed to build a reasonable data representation and fitness function so far, and got an average accuracy per class of about 65%.
What I need now is some help with two difficulties:
My data is imbalanced. Basically I am solving a binary classification problem and only 20% of the data belongs to class 1, while the other 80% belongs to class 0. I used prediction accuracy as my fitness function at first, but it was really bad. The best I have now is
Fitness = 0.5*(PositivePredictiveValue + NegativePredictiveValue) - const*ComplexityOfSolution
Please advise how I can improve my fitness function to correct for this class imbalance.
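For reference, here is a small R sketch contrasting the fitness term above with balanced accuracy (the mean of sensitivity and specificity), which is one common prevalence-insensitive alternative; the confusion-matrix counts below are invented to match the 20/80 split:

tp <- 30;  fn <- 10       # class 1: 40 of 200 cases (20%)
tn <- 120; fp <- 40       # class 0: 160 of 200 cases (80%)

ppv  <- tp / (tp + fp);  npv  <- tn / (tn + fn)
sens <- tp / (tp + fn);  spec <- tn / (tn + fp)

0.5 * (ppv + npv)         # current fitness term (depends on class prevalence)
0.5 * (sens + spec)       # balanced accuracy (does not)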
The second problem is overfitting. I divided my data into three parts: training (70%), testing (20%), and validation (10%). I train each chromosome on the training set, then evaluate its fitness function on the test set. This routine allows me to reach a fitness of 0.82 on my test data for the best individual in the population. But the same individual's result on the validation data is only 60%.
I added a validation check for the best individual each time before a new population is generated. I then compare the fitness on the validation set with the fitness on the test set. If the difference is more than 5%, I increase the penalty for solution complexity in my fitness function. But it didn't help.
I could also try to evaluate all individuals with validation set during each generation, and simply remove overfitted ones. But then I don't see any difference between my test and validation data. What else can be done here?
UPDATE:
For my second question I've found a great article, "Experiments on Controlling Overfitting in Genetic Programming". Along with the authors' own ideas on dealing with overfitting in GP, it contains an impressive review with a lot of references to many different approaches to the issue. Now I have a lot of new ideas I can try for my problem.
Unfortunately, I still can't find anything on selecting a proper fitness function that takes the unbalanced class proportions in my data into account.
65% accuracy is very bad when the baseline (classifying everything as the majority class) would give 80%. You need to at least reach the baseline in order to have a better model than the naive one.
I would not penalize complexity. Rather, limit the tree size (if possible). You could also identify simpler models during the run, for example by storing a Pareto front of models with quality and complexity as the two fitness values.
In HeuristicLab we have integrated GP-based classification that can do these things. There are several options: you can choose to optimize MSE or R² for classification. In the latest trunk build there is also an evaluator that optimizes accuracy directly (strictly speaking, it optimizes the classification penalties). Optimizing MSE means it assigns each class a value (1, 2, 3, ...) and tries to minimize the mean squared error from that value. This may not seem optimal at first, but it works. Optimizing accuracy directly may lead to faster overfitting. There is also a formula simplifier that allows you to prune and shrink your formula (and view the effects of doing so).
Also, does it need to be GP? Have you tried random forest classification or support vector machines as well? Random forests are pretty fast and usually work quite well.
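If you want a quick non-GP baseline, something along these lines (simulated data; assumes the randomForest package) gives per-class error rates and handles the 20/80 imbalance through stratified sampling:

library(randomForest)
set.seed(1)

# Simulated stand-in for the real data set (roughly 20% of cases in class 1)
n <- 1000
x <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- factor(rbinom(n, 1, plogis(-1.5 + x$x1 + 0.5 * x$x2)))

n_min <- min(table(y))
rf <- randomForest(x, y, sampsize = c(n_min, n_min), strata = y)
rf$confusion    # out-of-bag confusion matrix with per-class error rates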