GAMLSS mixed model

I have some repeated measures data structured similarly to the example Ovary data provided in the GAMLSS package. I would like to use the random effects syntax in my GAMLSS model as well as a smoother term for the fixed effect of time. I am having trouble extracting and plotting the fixed-effect component only.
Consider the following model.
library(gamlss)
library(ggplot2)
data(Ovary)
m1 <- gamlss(follicles~pb(Time) + re(random=~1+Time|Mare), data=Ovary)
The following extracts the fitted values for the model.
Ovary$fit<-fitted(m1)
I am interested in extracting and plotting the fixed-effect component; however, I am having some difficulty identifying exactly how to do this. The following calculates the partial effect for Time.
pef <- getPEF(obj=m1,term="Time")
time<-seq(0,1,by=.05)
fit<-pef(time)
mydata<-data.frame(time=time,fit=fit)
And this is the plot of the raw data, fitted values, and partial effect.
ggplot(data=Ovary, aes(x=Time, y=follicles, color=Mare, group=Mare)) +
  geom_line() +
  geom_point() +
  geom_line(data=Ovary, inherit.aes=FALSE, aes(x=Time, y=fit, group=Mare)) +
  geom_point(color="black") +
  geom_line(inherit.aes=FALSE, data=mydata, aes(x=time, y=fit), color="red", size=3)
The thick red line, which represents the partial effect for time, does not seem quite right to me based on the underlying fitted values.
So my questions are:
Could someone direct me to how to calculate fitted values for the fixed-effect component only in GAMLSS, if this is achievable? This would be more straightforward without the smooth for the fixed effect, but my real data have a nonlinear relation with time that isn't fit well by, for example, making the fixed component a polynomial.
I would like to do this using GAMLSS because the dependent variable I am actually using has a beta distribution (which GAMLSS implements) with a decreasing variance over time (which GAMLSS allows me to model explicitly), and because for another component of this project I am using this same dataset to calculate centiles (which GAMLSS can do, sans the random effect). I am open to alternative suggestions, though.

Related

About argument of PCA function in Matlab

I have 115*8000 data where 115 is the number of features. When I use the pca function of Matlab like this
[coeff,score,latent,tsquared,explained,mu] = pca(data);
on my data, I get some values. I read on here how I can reduce my data, but one thing confuses me. The explained output shows how much a feature weighs in the calculation, but do the features get reorganized in this process, or are they in exactly the same order as I gave them to the function?
Also, I give 115 features but explained shows 114. Why does that happen?
The data is not "reorganized" in PCA, it is transformed to a new space. When you crop the PCA space, that is still your data, but you are not going to be able to visualize/understand it there; you need to convert it back to the "normal" space using the eigenvectors and such.
explained gives you 114 because you already know what the answer is with 115: 100% of the data can be explained with the whole data!
Read about it further in this answer: Significance of 99% of variance covered by the first component in PCA
PCA does not "choose" some of your features and remove the rest.
So you should not still be thinking about the original features after running PCA.
It is well-explained here on Wikipedia. You are converting your samples from the space defined by your original features to a space where features are linearly uncorrelated and called "principal components". Note: these components are no longer the original features.
An example of this in 2D could be: you have a vector z=(2,3) defined in your Euclidean space. It needs 2 features (the x and the y). If we change the space and define it using the basis vectors v=(2,3) and w, an orthogonal vector to v, then z=(1,0), i.e. z = 1·v + 0·w, and it can now be represented with only 1 feature (the first coordinate!).
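A quick Matlab sketch of that 2D change of basis (the vectors are just the ones from the example above; this is not pca itself, B \ z simply solves for the coordinates of z in the new basis):
% Change of basis for the 2D example above
z = [2; 3];        % vector expressed in the original x/y features
v = [2; 3];        % first basis vector of the new space
w = [-3; 2];       % any vector orthogonal to v
B = [v, w];        % columns form the new basis
z_new = B \ z      % coordinates of z in the new basis -> [1; 0]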
The link that you shared explains exactly (in the selected answer) how you can go about using the outputs of the pca function to reduce your dimensionality.
(As noted by Ander, you do not care about the last components, since these are the weakest anyway and you want to drop them.)

Significance of 99% of variance covered by the first component in PCA

What does it mean/signify when the first component covers more than 99% of the total variance in a PCA analysis?
I have a feature matrix of size 500x1000 on which I used Matlab's pca function, which returns [coeff,score,latent,tsquared,explained]. The variable 'explained' returns the percentage of variance covered by each component.
explained tells you how accurately you could represent the data by using just that principal component. In your case it means that, using just the main principal component, you can describe the data very accurately (to 99%).
Let's make a 2D example. Imagine you have data that is 100x2 and you do PCA.
The result could be something like this (taken from the internets):
This data will give you an explained value for the first principal component (the PCA 1st dimension, the big green arrow in the figure) of around 90%.
What does it mean?
It means that if you project all your data onto that line, you will reconstruct the points with 90% accuracy (of course, you will lose the information in the PCA 2nd dimension direction).
In your example, with 99% it visually means that almost all the points in blue are lying on the big green arrow, with very little variation in the small green arrow direction.
Of course it is way more difficult to visualize with 1000 dimensions instead of 2, but I hope you understand.
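As a rough sketch of that 2D example (the correlated random data here is an assumption for illustration, not the data behind the figure):
% 100x2 data with strongly correlated columns, as in the 2D example
rng(1);                                % for reproducibility
x = randn(100,1);
data = [x, x + 0.7*randn(100,1)];
[coeff, score, latent, tsquared, explained] = pca(data);
disp(explained)                        % explained(1) should come out at roughly 90%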

Decision Level Fusion of SVR outputs

I have two sets of features predicting the same outputs. But instead of training everything at once, I would like to train them separately and fuse the decisions. In SVM classification, we can take the probability values for the classes, which can be used to train another SVM. But in SVR, how can we do this?
Any ideas?
Thanks :)
There are a couple of choices here. The two most popular ones would be:
ONE)
Build the two models and simply average the results.
It tends to work well in practice.
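A minimal Matlab sketch of option ONE, assuming X1, X2 (the two feature sets), Y (the targets) and held-out test sets X1_test, X2_test already exist (fitrsvm is the SVM regression trainer in the Statistics and Machine Learning Toolbox):
mdl1 = fitrsvm(X1, Y);    % SVR on the first feature set
mdl2 = fitrsvm(X2, Y);    % SVR on the second feature set
pred = (predict(mdl1, X1_test) + predict(mdl2, X2_test)) / 2;   % simple average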
TWO)
You could do it in a very similar fashion as when you have probabilities. The problem is, you need to control for overfitting. What I mean is that it is "dangerous" to produce a score with one set of features and apply it to another set where the labels are exactly the same as before (even if the new features are different). This is because the applied score was trained on these labels and therefore overfits on them (hyper-performs).
Normally you use cross-validation.
In your case you have
train_set_1 with X1 features and label Y
train_set_2 with X2 features and the same label Y
Some pseudo code:
Randomly split both train_set_1 and train_set_2 50-50 at exactly the same points, along with Y (the output array).
So now you have:
a.train_set_1 (50% of train_set_1)
b.train_set_1 (the remaining 50% of train_set_1)
a.train_set_2 (50% of train_set_2)
b.train_set_2 (the remaining 50% of train_set_2)
a.Y (the 50% of the output array that corresponds to the same rows as a.train_set_1 and a.train_set_2)
b.Y (the 50% of the output array that corresponds to the same rows as b.train_set_1 and b.train_set_2)
Here is the key part:
Build an SVR with a.train_set_1 (which contains the X1 features) and output a.Y, and
apply that model's prediction as a feature to b.train_set_2.
By this I mean, you score the b half with your first model (i.e. you predict on b.train_set_1, which holds the X1 features). Then you take this score and paste it next to b.train_set_2. So now this set will have the X2 features + 1 more feature, the score produced by the first model.
Then build your final model on this augmented b.train_set_2 and b.Y.
The new model, although it uses the score produced from train_set_1, still does so in an unbiased way, since that score was never trained on these labels!
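A rough Matlab sketch of this stacked setup, assuming X1, X2 and Y are already loaded and fitrsvm (Statistics and Machine Learning Toolbox) is used for both stages; a single 50-50 split is shown, mirroring the pseudo code:
n   = numel(Y);
idx = randperm(n);
a   = idx(1:floor(n/2));            % the "a" half
b   = idx(floor(n/2)+1:end);        % the "b" half
mdl1  = fitrsvm(X1(a,:), Y(a));     % first-stage SVR on the X1 features
score = predict(mdl1, X1(b,:));     % unbiased scores for the b half
X2b   = [X2(b,:), score];           % X2 features plus the new score column
mdl2  = fitrsvm(X2b, Y(b));         % final model on the augmented b half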
You might also find this paper quite useful

finding the global maximum of an unknown surface

I have a model that is solved, returns a single output value, and plots it. From those values, I plot a surface using x-values varying from 1-35 and y-values varying from 1-39, with the returned values on the z-axis. See below.
This figure does not behave according to a defined function; it is simply a plot of output values.
I've been trying to use a random optimization algorithm that I created in an attempt to find the global maximum, but it takes a very long time and isn't always correct (when compared to a grid-search algorithm that I use as a comparison). The surface that is created has subtle changes in it, enough to create multiple troublesome local minima and maxima. I'm looking for a way to find the global maximum of this non-convex surface in a relatively quick fashion.
EDIT:
35-by-39 is the search area, and that's as big as it gets. The values of the x and y axes are the input values of the model (I probably should've mentioned that), so each of the z-values is associated with an x and y input coordinate. And my initial guess is usually smack dab in the middle of the search area.
The creation of this figure took about 50 minutes, because each of the 1365 z-values takes about 3 seconds to compute. I'd like to do this without having to use exhaustive enumeration (evaluating every point for a z-value). I'd like this to take around 5 minutes instead of 50.
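For reference, the exhaustive grid-search baseline described above looks roughly like this in Matlab, assuming the model run is wrapped in a hypothetical function evaluate_model(x, y) that returns the z-value (about 3 s per call, so roughly 50 minutes for the full grid, as described):
best_z = -Inf;
for x = 1:35
    for y = 1:39
        z = evaluate_model(x, y);    % hypothetical wrapper around the slow model
        if z > best_z
            best_z = z; best_x = x; best_y = y;
        end
    end
end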
EDIT(2):
Sorry for the confusion. The figure below is a 35-by-39 grid of z-values and is used purely for reference. In the actual execution of the program, all I have is the x- and y-coordinates, and I am trying to find the global maximum z-value in the fewest function evaluations possible in order to save time. So horchler, in reference to your comment, the latter.
EDIT(3):
The thing with this figure is that it's only a single example. There are multiple different figures that are formed when I use data from a separate source (i.e. the left side might be uninteresting in this example, but for a separate set of data, it may or may not contain the global max). And this adds to the complexity. It is impossible to tell from the data where the global max will be located.
Some surfaces are incredibly smooth; others have large and frequent peaks throughout.

Process for comparing two datasets

I have two datasets at a time (in the form of vectors), and I plot them on the same axes to see how they relate to each other. I specifically note and look for places where both graphs have a similar shape (i.e. places where both have a seemingly positive/negative gradient over approximately the same intervals). Example:
So far I have been working through the data graphically, but I realize that, since the amount of data is so large, plotting it every time I want to check how two sets correlate will take far too much time.
Are there any ideas, scripts or functions that might be useful in order to automate this process somewhat?
The first thing you have to think about is the nature of the criteria you want to apply to establish the similarity. There is a wide variety of ways to measure similarity, and the more precisely you can describe what you want "similar" to mean in your problem, the easier it will be to implement, regardless of the programming language.
Having said that, here are some of the things you could look at:
correlation of the two datasets
difference of the derivative of the datasets (but I don't think it would be robust enough)
spectral analysis, as mentioned by thron of three
etc. ...
Knowing the origin of the datasets and their variability can also help a lot in formulating robust enough algorithms.
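As a quick Matlab sketch of the first option, assuming the two datasets are equal-length column vectors A and B (the 50-sample window below is an arbitrary choice to tune):
R = corrcoef(A, B);              % 2x2 correlation matrix
r = R(1,2);                      % overall correlation between the two datasets
w = 50;                          % windowed correlation localizes the similarity
rw = zeros(numel(A) - w + 1, 1);
for k = 1:numel(rw)
    Rk = corrcoef(A(k:k+w-1), B(k:k+w-1));
    rw(k) = Rk(1,2);
end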
Sure. Call your two vectors A and B.
1) (Optional) Smooth your data either with a simple averaging filter (Matlab 'smooth') or the 'filter' command. This will get rid of local changes in velocity ("gradient") that appear to be essentially noise (as in the ascending component of the red trace).
2) Differentiate both A and B. Now you are directly representing the velocity of each vector (Matlab 'diff').
3) Add the two differentiated vectors together (element-wise). Call this C.
4) Look for all points in C whose absolute value is above a certain threshold (you'll have to eyeball the data to get a good idea of what this should be). Points above this threshold indicate highly similar velocity.
5) Now look for where a high positive value in C is followed by a high negative value, or vice versa. In between these two points you will have similar curves in A and B.
Note: a) You could do the smoothing after step 3 rather than after step 1. b) Re 5), you could have a situation in which a 'hill' in your data is at the edge of the vector and so is 'cut in half', and the vectors descend to baseline before ascending in the next hill. Then 5) would misidentify the hill as coming between the initial descent and subsequent ascent. To avoid this, you could also require that the points in A and B in between the two points of velocity similarity have high absolute values.
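Putting steps 1)-5) together, a rough Matlab sketch (A and B are assumed to be equal-length column vectors; the smoothing span and the threshold are placeholders to tune by eye, as suggested above):
As = smooth(A, 11);              % 1) optional smoothing (span is a guess)
Bs = smooth(B, 11);
dA = diff(As);                   % 2) differentiate: the "velocity" of each vector
dB = diff(Bs);
C  = dA + dB;                    % 3) element-wise sum of the derivatives
thr = 0.5 * std(C);              % 4) threshold to eyeball/tune for your data
hits = find(abs(C) > thr);       %    indices of highly similar velocity
% 5) similar curve segments lie between a strong positive value of C and the
%    following strong negative value (or vice versa); inspect sign(C(hits)).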