residual standard deviation and mean absolute difference - linear-regression

I am studying the effect of a drug on the variability of a continuous dependent variable. The study includes two groups; one of them receives the drug. The dependent variable is measured repeatedly, six times during the study. Variability is assessed by the residual standard deviation and the mean absolute difference.
Any idea how to perform the analysis in SPSS?
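For illustration only, here is one possible way to compute the two variability measures per subject outside SPSS. It assumes "residual standard deviation" means the SD of residuals around each subject's own linear trend over the six measurements, and "mean absolute difference" means the mean absolute difference between successive measurements; the data layout and column names are hypothetical.

    import numpy as np
    import pandas as pd

    def subject_variability(y, t):
        """Two variability measures for one subject's six measurements.
        Assumptions: residual SD = SD of residuals around the subject's own
        linear trend over time; mean absolute difference = mean absolute
        difference between successive measurements."""
        y, t = np.asarray(y, float), np.asarray(t, float)
        slope, intercept = np.polyfit(t, y, 1)        # per-subject linear trend
        resid = y - (intercept + slope * t)
        return pd.Series({
            "residual_sd": resid.std(ddof=2),          # 2 df taken by the trend
            "mean_abs_diff": np.abs(np.diff(y)).mean(),
        })

    # Hypothetical long-format data with columns: subject, group, time, value
    # per_subject = (df.sort_values("time")
    #                  .groupby(["subject", "group"])
    #                  .apply(lambda g: subject_variability(g["value"], g["time"]))
    #                  .reset_index())
    # The treated and control groups can then be compared on these two columns.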

Related

How do I determine the number of runs I need to carry out in a NetLogo model to reduce the influence of randomization?

[Box plot of the output variable for different numbers of runs]
I have plotted an output variable as a box plot against the number of runs, but I cannot justify what the optimal number of runs should be.
If each simulation is considered one observation in a sample, your sample size (number of simulations) should be large enough that the estimate of the parameter of interest approaches its true population value for the model (see Cowled, B.D., Garner, M.G., Negus, K., Ward, M.P., 2012. Controlling disease outbreaks in wildlife using limited culling: modelling classical swine fever incursions in wild pigs in Australia. Vet. Res. 43, 3).
This is what Cowled et al. did: "To estimate our sample size, we calculated the mean of the parameter-of-interest (after each simulation). We then determined the coefficient of variation of this mean. At the point when the coefficient of variation was less than 15% for 30 consecutive simulations we considered that convergence had occurred and that this number of simulations was adequate to estimate the parameter with precision."
I have used a similar approach to calculate the required number of model simulations: Belsare, A.V. and Gompper, M.E. 2015. A model-based approach for investigation and mitigation of disease spillover risks to wildlife: dogs, foxes and canine distemper in central India. Ecological Modelling 296, 102-112.
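As a sketch of one possible reading of that stopping rule (treating the "coefficient of variation of the mean" as the standard error of the running mean divided by the running mean; the function and variable names below are made up for illustration):

    import numpy as np

    def runs_needed(outcomes, cv_threshold=0.15, consecutive=30):
        """One reading of the Cowled et al. rule: after each simulation, compute
        the running mean of the parameter of interest and the coefficient of
        variation of that mean (standard error / mean); stop once the CV has
        stayed below cv_threshold for `consecutive` simulations in a row."""
        outcomes = np.asarray(outcomes, dtype=float)
        streak = 0
        for n in range(2, len(outcomes) + 1):
            sample = outcomes[:n]
            mean = sample.mean()
            se = sample.std(ddof=1) / np.sqrt(n)
            cv = se / abs(mean) if mean != 0 else np.inf
            streak = streak + 1 if cv < cv_threshold else 0
            if streak >= consecutive:
                return n          # this many runs satisfy the criterion
        return None               # criterion never met; carry out more runs

    # Example with made-up simulation output:
    # rng = np.random.default_rng(1)
    # print(runs_needed(rng.normal(10.0, 4.0, size=500)))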

How to compare two hyperparameters in a hierarchical model?

In one hierarchical model, we have two hyperparameters: dnorm(A_mu, 0.25^-2) and dnorm(B_mu, 0.25^-2). Here 0.25 is the SD, which I keep fixed. A_mu and B_mu represent the group-level means. After fitting the data with rjags, we get posterior distributions for each parameter. So do I just directly compare the highest posterior density intervals (HDIs) of A_mu and B_mu? Do I need to calculate something using the SD (0.25)?
In another case, the SDs of the two hyperparameters are not fixed, e.g. dnorm(A_mu, A_sd) and dnorm(B_mu, B_sd). How can I compare the two hyperparameters and make a decision, e.g. that one group is significantly different from the other?
Remember that you are getting posterior distributions for A_mu and B_mu. This makes your comparison easy: you can look at the 95% credible intervals for the parameters (or pick whatever interval level suits your needs). I believe JAGS uses Gibbs sampling, so you should be able to get the raw samples from the posteriors of A_mu and B_mu. You can then ask "what is the probability that B_mu is greater than some value?" by calculating the proportion of posterior samples that are greater than that value. Alternatively, and in a way similar to frequentist hypothesis testing, you can ask what the probability is that the mean of B_mu is a draw from the posterior of A_mu. So the key is simply to work directly with the samples from your posterior. I would recommend Andrew Gelman's BDA3 textbook (Chapter 4) as a really good reference on these concepts.
A few things to keep in mind before drawing conclusions from the data: (1) always check the validity of your Markov chains by evaluating things like autocorrelation; (2) do a posterior predictive check to make sure your model fits the data well. If the model fits the data poorly, the procedure above can give very misleading results.
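As a rough sketch of the sample-based comparison described above, assuming the MCMC draws for A_mu and B_mu have been exported from rjags as plain arrays (the file names below are hypothetical):

    import numpy as np

    # Hypothetical files holding the MCMC draws for A_mu and B_mu, exported from
    # rjags (e.g. written out from the coda samples). Draws for the two
    # parameters should come from the same iterations so they can be paired.
    a_mu = np.loadtxt("a_mu_samples.csv")
    b_mu = np.loadtxt("b_mu_samples.csv")

    # Posterior probability that the B group mean exceeds the A group mean
    p_b_gt_a = np.mean(b_mu > a_mu)

    # 95% credible interval for the difference B_mu - A_mu
    diff = b_mu - a_mu
    ci_low, ci_high = np.percentile(diff, [2.5, 97.5])

    print(f"P(B_mu > A_mu) = {p_b_gt_a:.3f}")
    print(f"95% credible interval for B_mu - A_mu: [{ci_low:.3f}, {ci_high:.3f}]")
    # A probability near 0 or 1 (or an interval well away from 0) indicates a
    # clear difference between the groups under this model.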

Appropriate method for clustering ordinal variables

I was reading through all (or most) previously asked questions, but couldn't find an answer to my problem...
I have 13 variables measured on an ordinal scale (they represent knowledge transfer channels), which I want to cluster (HCA) for a subsequent binary logistic regression analysis (including all 13 variables is not possible due to the sample size of N = 208). Factor analysis seems inappropriate due to the measurement level. I am using SPSS (but have tried R as well).
Questions:
1. Am I right to use the chi-squared measure for count data instead of the (squared) Euclidean distance?
2. How can I justify the choice of linkage method? I tried single, complete, Ward, and average linkage, but they all give different results and I can't find a source to base my decision on.
Thanks a lot in advance!
Answer 1: Since the variables are on an ordinal scale, a chi-square-based measure is appropriate, because "a chi-square test is designed to analyze categorical data. That means that the data has been counted and divided into categories. It will not work with parametric or continuous data (such as height in inches)" (reference).
Again, since ordinal-scaled data is essentially count or frequency data, you can use regular descriptive statistics (mean, standard deviation, etc.), parametric tests such as ANOVA, or non-parametric tests such as the Mann-Whitney U test to compare two groups or the Kruskal–Wallis H test to compare three or more groups.
Answer 2: In a clustering problem, the choice of distance measure depends on the type of variables. I recommend reading these detailed posts: 1, 2, 3.
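One possible sketch, in Python rather than SPSS, of clustering the 13 ordinal variables with a rank-based (Spearman) dissimilarity and comparing linkage methods via the cophenetic correlation; the file name and column layout are assumptions.

    import pandas as pd
    from scipy.cluster.hierarchy import linkage, cophenet, fcluster
    from scipy.spatial.distance import squareform

    # Hypothetical layout: 208 respondents x 13 ordinal "channel" variables
    df = pd.read_csv("channels.csv")

    # Spearman correlation respects the ordinal scale; turn it into a dissimilarity
    dist = 1 - df.corr(method="spearman")
    condensed = squareform(dist.values, checks=False)   # condensed form for linkage()

    # Compare linkage methods by how faithfully the dendrogram preserves the
    # original dissimilarities (cophenetic correlation); note that Ward linkage
    # formally assumes Euclidean distances.
    for method in ["single", "complete", "average", "ward"]:
        Z = linkage(condensed, method=method)
        coph_corr, _ = cophenet(Z, condensed)
        print(f"{method:8s} cophenetic correlation = {coph_corr:.3f}")

    # Cut the chosen dendrogram into, say, 4 clusters of variables:
    # labels = fcluster(linkage(condensed, method="average"), t=4, criterion="maxclust")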

Pearson correlation fails for perfectly correlated sets

Consider the following examples of the Pearson correlation coefficient on sets of film ratings by users A and B:
A = [2,4,4,4,4]
B = [5,4,4,4,4]
pearson(A,B) = -1
A = [5,5,5,5,5]
B = [5,5,5,5,5]
pearson(A,B) = NaN
Pearson correlation seems to be widely used for calculating the similarity between two users' rating sets in collaborative filtering. However, the sets above show high (even perfect) similarity, yet the outputs suggest the sets are negatively correlated (or an error is encountered due to division by zero).
I initially thought it was an issue in my implementation, but I've since validated it against a few online calculators.
If the outputs are correct, why is Pearson correlation considered a good choice for this application?
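For reference, a quick sketch in Python (NumPy/SciPy) reproduces both results:

    import numpy as np
    from scipy.stats import pearsonr

    A, B = [2, 4, 4, 4, 4], [5, 4, 4, 4, 4]
    print(pearsonr(A, B)[0])        # -1.0: the single disagreeing pair drives r

    A, B = [5, 5, 5, 5, 5], [5, 5, 5, 5, 5]
    print(np.corrcoef(A, B)[0, 1])  # nan (with a warning): both sets have zero variance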
Pearson correlation measures the association between two data sets, i.e. how they increase or decrease together. In visual terms: how closely the points lie on a straight line when one set is plotted on the x-axis and the other on the y-axis, irrespective of any difference in scale between the sets.
[Scatter plot illustrating positive correlation between two sets on different scales omitted.]
For your second case, the data sets are constant, so their standard deviations are zero. The product of the standard deviations appears in the denominator of the Pearson correlation, so the coefficient is undefined. In other words, it is not possible to say how one set increases or decreases along with the other: plotted against each other, all the observations fall on a single point, so no correlation pattern can be estimated.
A very simple solution would be to handle these cases separately. Alternatively, if you want to keep the same computation flow, a hack is to make sure that neither set has a standard deviation of zero: perturb a single value in the set by a tiny amount, and since the data sets remain nearly identical, you would still obtain a high correlation coefficient.
I would also recommend studying other similarity measures, such as Euclidean distance, cosine similarity, and adjusted cosine similarity, and making an informed decision about which best suits your use case; a hybrid approach is also possible.
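As a rough sketch of the cosine-based alternatives mentioned above (plain NumPy, no particular recommender library assumed), plain cosine similarity handles both example pairs, while a mean-centred (adjusted) cosine hits the same zero-variance problem as Pearson:

    import numpy as np

    def cosine_similarity(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine_similarity([2, 4, 4, 4, 4], [5, 4, 4, 4, 4]))  # ~0.95, unlike r = -1
    print(cosine_similarity([5, 5, 5, 5, 5], [5, 5, 5, 5, 5]))  # 1.0 for identical constant ratings

    # Adjusted cosine subtracts each user's mean rating first, so for all-equal
    # ratings it runs into the same zero-norm problem as Pearson correlation.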
Pearson correlation divides by the standard deviations of the variables, which in your case are zero, causing a division-by-zero error. It is still considered a good choice because real data sets rarely have a standard deviation of zero. In other words, completely uniform data sets are outside the domain of the Pearson correlation coefficient, but that is no reason not to use it elsewhere.

Why does classifier accuracy drop after PCA, even though 99% of the total variance is covered?

I have a 500x1000 feature matrix, and principal component analysis says that over 99% of the total variance is covered by the first component. So I replace each 1000-dimensional point by a 1-dimensional point, giving a 500x1 feature matrix (using Matlab's pca function). However, my classifier accuracy, which was initially around 80% with 1000 features, now drops to 30% with 1 feature, even though more than 99% of the variance is accounted for by this feature. What could be the explanation for this, or are my methods wrong?
(This question partly arises from my earlier question Significance of 99% of variance covered by the first component in PCA)
Edit:
I used Weka's principal components method to perform the dimensionality reduction and a support vector machine (SVM) classifier.
Principal components do not necessarily have any relation to classification accuracy. There could be a two-variable situation where 99% of the variance corresponds to the first PC, yet that PC has no relation to the underlying classes in the data, whereas the second PC (which contributes only 1% of the variance) is the one that separates the classes. If you keep only the first PC, you lose the feature that actually provides the ability to classify the data.
In practice, smaller (lower-variance) PCs are often associated with noise, so there can be benefit in removing them, but there is no guarantee of this.
Consider a case where you have two variables: a person's mass (in grams) and body temperature (in degrees Celsius). You want to predict which people have the flu and which do not. In this case, mass has a much greater variance but probably no correlation with the flu, whereas temperature, which has low variance, has a strong correlation with the flu. After the principal components transformation, the first PC will be strongly aligned with mass (since it has much greater variance), so if you dropped the second PC, you would be losing almost all of your classification accuracy.
It is important to remember that principal component analysis is an unsupervised transformation of the data: it does not consider the labels of your training data when computing the transformation (as opposed to something like Fisher's linear discriminant).
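As a rough synthetic sketch of the mass/temperature scenario above (scikit-learn, made-up data): PC1 carries essentially all of the variance but no class information, so an SVM trained on PC1 alone performs at chance level, while adding PC2 restores accuracy.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    flu = rng.integers(0, 2, n)                    # class labels (made-up data)
    mass = rng.normal(70000, 15000, n)             # grams: huge variance, unrelated to class
    temp = rng.normal(36.8 + 1.5 * flu, 0.4, n)    # degrees C: tiny variance, informative
    X = np.column_stack([mass, temp])

    X_tr, X_te, y_tr, y_te = train_test_split(X, flu, random_state=0)

    pca = PCA().fit(X_tr)
    print(pca.explained_variance_ratio_)           # PC1 holds essentially all the variance

    for k in (1, 2):
        clf = make_pipeline(StandardScaler(), SVC())   # rescale the kept PCs for the SVM
        clf.fit(pca.transform(X_tr)[:, :k], y_tr)
        acc = clf.score(pca.transform(X_te)[:, :k], y_te)
        print(f"SVM accuracy with the first {k} PC(s): {acc:.2f}")
    # Expect roughly chance-level accuracy with PC1 alone and high accuracy with both PCs.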