Cox regression with visreg2d - visualization

I have two continuous covariates and survival time/event.
I am trying to visualize the hazard of mortality by the two covariates simultaneously.
This is my R code:
fit <- coxph(Surv(time, death==1) ~ pspline(A)+pspline(B), data=data)
visreg2d(fit, "A", "B")
I get this result image:
As you can see, the Z axis shows log(hazard), which is hard to interpret. Is there a way to get the hazard itself (i.e. exp(log hazard))?
Thanks
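If visreg2d() accepts the same trans argument as visreg() in your installed version (worth checking with ?visreg2d), passing trans = exp should back-transform the surface so the Z axis is on the hazard scale rather than the log-hazard scale. Note that for a coxph model this is a relative hazard (hazard ratio against the reference), since the model itself carries no baseline hazard. A minimal sketch, reusing the fit object from the question:
## Assumes visreg2d() supports 'trans'; exp() back-transforms log(hazard)
library(visreg)
visreg2d(fit, "A", "B", trans = exp)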

Related

How to run an exponential decay mixed model?

I am not familiar with nonlinear regression and would appreciate some help with running an exponential decay model in R. Please see the graph for what the data look like. My hunch is that an exponential model might be a good choice. I have one fixed effect and one random effect: y ~ x + (1|random factor). How do I get the starting values for the exponential model in R (please assume that I know nothing about nonlinear regression)? How do I subsequently run a nonlinear model with these starting values? Could anyone please help me with the logic as well as the R code?
As I am not familiar with nonlinear regression, I haven't been able to attempt it in R.
raw plot
The correct syntax will depend on your experimental design and model, but I hope this gives you a general idea of how to get started.
We begin by generating some data that should match the type of data you are working with. You had mentioned a fixed factor and a random one. Here, the fixed factor is represented by the variable treatment and the random factor is represented by the variable grouping_factor.
library(nlraa)
library(nlme)
library(ggplot2)
## Setting this seed should allow you to reach the same result as me
set.seed(3232333)
example_data <- expand.grid(treatment = c("A", "B"),
                            grouping_factor = c('1', '2', '3'),
                            replication = c(1, 2, 3),
                            xvar = 1:15)
The next step is to create some "observations". Here, we use an exponential function y = a * exp(c * x) plus some random noise to create the data. Also, we add a constant to treatment A just to create some treatment differences.
example_data$y <- ave(example_data$xvar,
                      example_data[, c('treatment', 'replication', 'grouping_factor')],
                      FUN = function(x) {
                        expf(x = x, a = 10, c = -0.3) + rnorm(1, 0, 0.6)
                      })
example_data$y[example_data$treatment == 'A'] <- example_data$y[example_data$treatment == 'A'] + 0.8
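Before fitting anything, it can help to eyeball the simulated data; since ggplot2 is already loaded, a quick plot like the one below (just a convenience, not required for the fit) confirms the exponential-decay shape:
## Quick look at the simulated data, coloured by treatment and faceted by group
ggplot(example_data, aes(x = xvar, y = y, colour = treatment)) +
  geom_point() +
  facet_wrap(~ grouping_factor)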
All right, now we start fitting the model.
## Create a grouped data frame
exampleG <- groupedData(y ~ xvar | grouping_factor, data = example_data)
## Fit a separate model to each grouping level
fitL <- nlsList(y ~ SSexpf(xvar, a, c), data = exampleG)
## Fit the overall nonlinear mixed-effects model (fit1) from the nlsList fit
fit1 <- nlme(fitL)
## Grab the coefficients of the general model
fxf <- fixed.effects(fit1)
## Add treatment as a fixed effect. Also, use the coefficients from the previous
## model as starting values.
fit2 <- update(fit1, fixed = a + c ~ treatment,
               start = c(fxf[1], 0,
                         fxf[2], 0))
Looking at the model output, you will see information like the following:
Nonlinear mixed-effects model fit by maximum likelihood
Model: y ~ SSexpf(xvar, a, c)
Data: exampleG
AIC BIC logLik
475.8632 504.6506 -229.9316
Random effects:
Formula: list(a ~ 1, c ~ 1)
Level: grouping_factor
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
a.(Intercept) 3.254827e-04 a.(In)
c.(Intercept) 1.248580e-06 0
Residual 5.670317e-01
Fixed effects: a + c ~ treatment
Value Std.Error DF t-value p-value
a.(Intercept) 9.634383 0.2189967 264 43.99329 0.0000
a.treatmentB 0.353342 0.3621573 264 0.97566 0.3301
c.(Intercept) -0.204848 0.0060642 264 -33.77976 0.0000
c.treatmentB -0.092138 0.0120463 264 -7.64867 0.0000
Correlation:
a.(In) a.trtB c.(In)
a.treatmentB -0.605
c.(Intercept) -0.785 0.475
c.treatmentB 0.395 -0.792 -0.503
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-1.93208903 -0.34340037 0.04767133 0.78924247 1.95516431
Number of Observations: 270
Number of Groups: 3
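To read the fixed-effects table above: treatment A is the reference level, so treatment B's parameters are the intercepts plus the corresponding treatmentB contrasts. A minimal sketch of recovering the treatment-level parameters (assuming the fit2 object from above):
## Treatment-level parameters implied by the fixed effects shown above
fxf2 <- fixef(fit2)
a_A <- fxf2["a.(Intercept)"]                        # 'a' (value at xvar = 0) for treatment A
a_B <- fxf2["a.(Intercept)"] + fxf2["a.treatmentB"] # 'a' for treatment B
c_A <- fxf2["c.(Intercept)"]                        # decay rate for treatment A
c_B <- fxf2["c.(Intercept)"] + fxf2["c.treatmentB"] # decay rate for treatment B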
Then, if you wanted to visualize the model fit, you could do the following.
## Here we store the model predictions for visualization purposes
predictionsDf <- cbind(example_data,
                       predict_nlme(fit2, interval = 'conf'))
## Here we make a graph to check it out
ggplot() +
  geom_ribbon(data = predictionsDf,
              aes(x = xvar, ymin = Q2.5, ymax = Q97.5, fill = treatment),
              color = NA, alpha = 0.3) +
  geom_point(data = example_data, aes(x = xvar, y = y, col = treatment)) +
  geom_line(data = predictionsDf, aes(x = xvar, y = Estimate, col = treatment), size = 1.1)
This shows the model fit.

Fitting a sine wave with Keras and PYMC3 yields unexpected results

I've been trying to fit a sine curve with a Keras (Theano backend) model using pymc3. I've been using this post [http://twiecki.github.io/blog/2016/07/05/bayesian-deep-learning/] as a reference point.
A Keras implementation alone, fit using optimization, does a good job; however, Hamiltonian Monte Carlo and variational sampling from pymc3 are not fitting the data. The trace is stuck where the prior is initiated, and when I move the prior the posterior moves to the same spot. The posterior predictive of the Bayesian model in cell 59 barely captures the sine wave, whereas the non-Bayesian model fits it near perfectly in cell 63. I created a notebook here: https://gist.github.com/tomc4yt/d2fb694247984b1f8e89cfd80aff8706 which shows the code and the results.
Here is a snippet of the model below...
class GaussWeights(object):
    def __init__(self):
        self.count = 0

    def __call__(self, shape, name='w'):
        return pm.Normal(
            name, mu=0, sd=.1,
            testval=np.random.normal(size=shape).astype(np.float32),
            shape=shape)

def build_ann(x, y, init):
    with pm.Model() as m:
        i = Input(tensor=x, shape=x.get_value().shape[1:])
        m = i
        m = Dense(4, init=init, activation='tanh')(m)
        m = Dense(1, init=init, activation='tanh')(m)
        sigma = pm.Normal('sigma', 0, 1, transform=None)
        out = pm.Normal('out',
                        m, 1,
                        observed=y, transform=None)
    return out
with pm.Model() as neural_network:
    likelihood = build_ann(input_var, target_var, GaussWeights())
    # v_params = pm.variational.advi(
    #     n=300, learning_rate=.4
    # )
    # trace = pm.variational.sample_vp(v_params, draws=2000)
    start = pm.find_MAP(fmin=scipy.optimize.fmin_powell)
    step = pm.HamiltonianMC(scaling=start)
    trace = pm.sample(1000, step, progressbar=True)
The model contains normal noise with a fixed std of 1:
out = pm.Normal('out', m, 1, observed=y)
but the dataset does not. It is only natural that the posterior predictive does not match the dataset; they were generated in very different ways. To make it more realistic you could add noise to your dataset, and then estimate sigma:
mu = pm.Deterministic('mu', m)
sigma = pm.HalfCauchy('sigma', beta=1)
pm.Normal('y', mu=mu, sd=sigma, observed=y)
What you are doing right now is similar to taking the output from the network and adding standard normal noise.
A couple of unrelated comments:
out is not the likelihood, it is just the dataset again.
If you use HamiltonianMC instead of NUTS, you need to set the step size and the integration time yourself. The defaults are not usually useful.
It seems keras changed in 2.0, and this way of combining pymc3 and keras no longer appears to work.

Bitcoin price prediction using Spark and Scala [duplicate]

I am new to Apache Spark and trying to use the machine learning library to predict some data. My dataset right now is only about 350 points. Here are 7 of those points:
"365","4",41401.387,5330569
"364","3",51517.886,5946290
"363","2",55059.838,6097388
"362","1",43780.977,5304694
"361","7",46447.196,5471836
"360","6",50656.121,5849862
"359","5",44494.476,5460289
Here's my code:
def parsePoint(line):
    split = map(sanitize, line.split(','))
    rev = split.pop(-2)
    return LabeledPoint(rev, split)

def sanitize(value):
    return float(value.strip('"'))
parsedData = textFile.map(parsePoint)
model = LinearRegressionWithSGD.train(parsedData, iterations=10)
print model.predict(parsedData.first().features)
The prediction is something totally crazy, like -6.92840330273e+136. If I don't set iterations in train(), then I get nan as a result. What am I doing wrong? Is it my data set (the size of it, maybe?) or my configuration?
The problem is that LinearRegressionWithSGD uses stochastic gradient descent (SGD) to optimize the weight vector of your linear model. SGD is really sensitive to the provided step size, which is used to update the intermediate solution.
What SGD does is calculate the gradient g of the cost function given a sample of the input points and the current weights w. To update the weights w, you move a certain distance in the opposite direction of g; that distance is your step size s.
w(i+1) = w(i) - s * g
Since you're not providing an explicit step size, MLlib uses a default step size of 1.0. This seems not to work for your use case. I'd recommend trying different, usually much smaller, step sizes to see how LinearRegressionWithSGD behaves, e.g. (matching the Python API used in your code):
LinearRegressionWithSGD.train(parsedData, iterations=10, step=0.001)

Optimization with Unknown Number of Variables

Since the original problem is more complicated, the idea is described using a simple example below.
For example, suppose we want to put several router antennas somewhere in a room so that a cellphone gets the most signal strength on the table (received power > Pmax) and the weakest signal strength on the bed (received power < Pmin). What is the best (minimum) number of antennas that should be used, and where should they be placed, in order to achieve this goal?
Mathematically, SIGNAL_STRENGTH is dependent on the variables (x, y, z) and on the number of variables, i.e. on the locations and the number of antennas.
Besides, assume
PREDICTION = f((x1, y1, z1), (x2, y2, z2), ..., (xi, yi, zi), ..., (xn, yn, zn))
where n and the (xi, yi, zi) are to be optimized. The goal is to minimize
cost function = ||SIGNAL_STRENGTH - PREDICTION||
I tried to use a GA with mixed-integer programming in Matlab to implement this. Two optimization functions are used: the outer function optimizes n, and the inner function optimizes (x, y, z) for a given n. This method is slow, and it has not produced a single result so far.
Does anyone have a more efficient way to solve this problem? Any suggestion is appreciated. Thanks in advance.
Terminology | Problem Definition
An antenna is transmitting at position a in R^3 with constant power. Its signal strength can be measured by some S: R^3 -> R, where S has a single maximum S_0 at a and the set defined by S(x) > const is simply connected, e.g. S(x) = S_0 * exp(-const * ||x - a||^2).
Given a set of antennas A, the resulting signal strength is the maximum over the individual antennas,
S_A(x) = max{S_a(x) : a in A},
which means we 'lock' onto the strongest antenna, which is what cell phones do.
Let K = R^3 x R denote the space of points (position, intensity). Now consider two finite subsets POI_min and POI_max of K. We want to find the set A with the minimal number of antennas (|A| -> min) that satisfies
for all (x, w) in POI_min : S_A(x) < w, and for all (x, w) in POI_max : S_A(x) > w.
Implication
Since the set S(x) > const is simply connected, there has to be an antenna within a sphere around the position of each element (x, w) in POI_max, with radius r = max{||xi - x|| : S(xi) = w}. In other words, if we placed an antenna at x, the furthest we could move away from x and still receive signal strength w is r, so an actual antenna has to lie within that radius of x.
By a similar argument for POI_min, it follows that there can be no antenna within r = min{||xi - x|| : S(xi) = w} of such a point.
Solution
Instead of solving a nonlinear optimization task, we can intersect spheres to obtain the optimal solution. If k spheres around POI_max positions intersect, we can place a single antenna in the intersection, reducing the number of antennas needed by k-1.
However, each antenna that is placed must also satisfy all constraints given by the elements of POI_min. Assuming that antennas are omnidirectional, so the orientation of an antenna doesn't matter, we can do the following (pseudocode):
min_sphere = {(x_i, r_i) : from POI_min}
spheres_to_cover = {(x_i, r_i) : from POI_max}
A = {}
while not is_empty(spheres_to_cover)
    power_set_score = struct // holds score, k
    PS <- construct power set of spheres_to_cover
    for i = 1:number_of_elements(PS)
        k = PS[i]
        if intersection(k) \ min_sphere is not empty
            power_set_score[i].score = |k|
        else
            power_set_score[i].score = 0
        end if
        power_set_score[i].k = k
    end for
    sort(power_set_score) // sort by score, biggest first
    A <- add arbitrary point in (intersection(power_set_score[1].k) \ min_sphere)
    spheres_to_cover = spheres_to_cover \ power_set_score[1].k
end while
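To make the admissibility test inside the loop ("intersection(k) \ min_sphere is not empty") a bit more concrete, here is a minimal R sketch for checking a single candidate antenna position; the function names and the data layout (one sphere per row with columns x, y, z, plus a separate radius vector) are illustrative assumptions, not part of any existing code:
## Euclidean distance from a candidate point p (length-3 vector) to each
## sphere centre (centres: n x 3 matrix, one centre per row)
dist_to <- function(p, centres) sqrt(rowSums(sweep(centres, 2, p)^2))
## p is admissible if it lies inside every POI_max sphere under consideration ...
inside_all <- function(p, centres, radii) all(dist_to(p, centres) <= radii)
## ... and outside every POI_min sphere
outside_all <- function(p, centres, radii) all(dist_to(p, centres) > radii)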
On the other hand, you have only given an example problem, so this solution may not be applicable or general enough for your actual case. I also made a few assumptions, so being more specific in the question might get you an even better answer.

Why does the CLARA clustering method not give the same classes as when I do the clustering manually?

I am using CLARA (in the 'cluster' package). This method is supposed to assign each observation to the closest medoid. But when I calculate the distances between medoids and observations manually and assign observations myself, the results are slightly different (the assignments disagree for roughly 1-2 percent of observations). Does anyone know how clara calculates dissimilarities and why I get different clustering results?
This is the function I use to do clustering manually:
Manual.Clustering <- function(Data, Clusters, Weights=NULL) {
  if (is.null(Weights)) Weights <- rep(1, length(Data));
  if (length(Weights) == 1) Weights <- rep(Weights, length(Data));
  Data2 <- Data[, rownames(Clusters)];
  Data2 <- Weights * Data2;
  dist <- matrix(NA, nrow=nrow(Data), ncol=ncol(Clusters));
  for (i in 1:ncol(Clusters)) {
    dist[, i] <- Dist2Center(Data2, Clusters[, i], Weights=NULL);
  }
  classes <- apply(dist, 1, which.min);
  Out <- cbind(Data, classes);
  colnames(Out) <- c(colnames(Data), "Class");
  Freq <- FreqTable(Out[, "Class"]);
  Freq <- as.data.frame(Freq);
  return(list(Data=Out, Freq=Freq));
}
=====================================
Dist2Center <- function(Data, Center, Weights=NULL) {
  if (is.null(Weights)) Weights <- matrix(rep(1, nrow(Data)), ncol=1);
  if (length(Weights) == 1) Weights <- rep(Weights, nrow(Data));
  if (ncol(Data) != length(Center)) stop();
  Dist <- Weights * apply(Data, 1, function(x) { sqrt(sum((x - Center)^2, na.rm=T)) });
  return(Dist);
}
Data: Original Data.
Clusters: t(Medoids).
Medoids: 'medoids' picked by clara.
Dist2Center: A function which calculates the Euclidean distance of each observation from each medoid.
Behnam.
I found that this happens only when the input data has NA values. For inputs without NAs, the results of my algorithm and clara are identical. I think this is related to how clara handles NA values when calculating the distances of observations to medoids. Any comments? Any suggestions for replacing clara with a better algorithm that works with large datasets and NA values?
Having a look at the clara C code, I found that clara adjusts the distances if there are any missing values. The line "dsum *= (nobs / pp)" in the code shows that it counts the number of non-missing values in each pair of observations (nobs), divides that by the number of variables (pp), and multiplies the sum of squares by this factor. That is why it does not give the same results as my algorithm.
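In case it helps anyone reproduce this, here is a minimal R sketch of the adjustment as I understand it from the quoted line (the function name dist_clara_like and the final square root, added to mirror my Dist2Center, are my own assumptions rather than code from the 'cluster' package):
## Distance of each row of x to a single medoid 'center': the sum of squared
## differences is taken over the variables observed in both the row and the
## medoid (nobs of them) and then scaled by nobs / pp, where pp is the total
## number of variables, following the quoted "dsum *= (nobs / pp)" line.
dist_clara_like <- function(x, center) {
  pp <- length(center)
  apply(x, 1, function(row) {
    ok   <- !is.na(row) & !is.na(center)
    nobs <- sum(ok)
    dsum <- sum((row[ok] - center[ok])^2)
    sqrt(dsum * (nobs / pp))
  })
}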