How do I calculate the gradient of a ranking loss? (neural-network)

I am trying to understand the ranking loss (a.k.a. maximum margin objective function, MarginRankingLoss, ...) based on the CS 224D: Deep Learning for NLP lecture notes.
In the notes, the cost is defined as follows:
J = 1 + s_c − s
where s = f(θ, x) and s_c = f(θ, x_c);
x is the correct input and x_c is the corrupt input.
So s is the score of the good thing and s_c is the score of the bad thing.
My question is this:
To update the weights, do I have to compute ∂J/∂θ or ∂s/∂θ?
I thought I had to compute ∂J/∂θ to update θ.
Therefore, since J = 1 + s_c − s, we have ∂J/∂θ = ∂s_c/∂θ − ∂s/∂θ.
So I thought that ∂s_c/∂θ and ∂s/∂θ each had to be obtained.
The lecture notes, however, calculate ∂J/∂s = −1 and use this value to update the network.
What am I doing wrong?
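(As a sanity check that the two views agree, here is a minimal sketch, assuming a toy linear score s = θ·x purely for illustration; it confirms that ∂J/∂s = −1 is the chain-rule factor that gets multiplied by ∂s/∂θ, not a replacement for it.)
import numpy as np

# Hypothetical linear score: f(theta, x) = theta . x
theta = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 0.5])    # correct input
xc = np.array([0.3, -0.7, 1.5])  # corrupt input

# Direct differentiation of J = 1 + s_c - s w.r.t. theta
grad_direct = xc - x             # dJ/dtheta = ds_c/dtheta - ds/dtheta

# Chain rule, as in the lecture notes: dJ/ds_c = +1 and dJ/ds = -1
grad_chain = (+1) * xc + (-1) * x

print(np.allclose(grad_direct, grad_chain))  # True: both views agree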

How to run an exponential decay mixed model?

I am not familiar with nonlinear regression and would appreciate some help with running an exponential decay model in R. Please see the graph for what the data look like; my hunch is that an exponential model might be a good choice. I have one fixed effect and one random effect: y ~ x + (1|random factor). How do I get starting values for the exponential model (please assume that I know nothing about nonlinear regression) in R, and how do I subsequently run a nonlinear model with these starting values? Could anyone please help me with the logic as well as the R code?
As I am not familiar with nonlinear regression, I haven't been able to attempt it in R.
[figure: raw plot of the data]
The correct syntax will depend on your experimental design and model, but I hope to give you a general idea of how to get started.
We begin by generating some data that should match the type of data you are working with. You mentioned a fixed factor and a random one. Here, the fixed factor is represented by the variable treatment and the random factor by the variable grouping_factor.
library(nlraa)
library(nlme)
library(ggplot2)
## Setting this seed should allow you to reach the same result as me
set.seed(3232333)
example_data <- expand.grid(treatment = c("A", "B"),
                            grouping_factor = c('1', '2', '3'),
                            replication = c(1, 2, 3),
                            xvar = 1:15)
The next step is to create some "observations". Here, we use an exponential function y=a∗exp(c∗x) and some random noise to create some data. Also, we add a constant to treatment A just to create some treatment differences.
example_data$y <- ave(example_data$xvar,
                      example_data[, c('treatment', 'replication', 'grouping_factor')],
                      FUN = function(x) { expf(x = x, a = 10, c = -0.3) + rnorm(1, 0, 0.6) })
example_data$y[example_data$treatment == 'A'] <- example_data$y[example_data$treatment == 'A'] + 0.8
All right, now we start fitting the model.
## Create a grouped data frame
exampleG <- groupedData(y ~ xvar|grouping_factor, data = example_data)
## Fit a separate model to each grouped level
fitL <- nlsList(y ~ SSexpf(xvar, a, c), data = exampleG)
## Fit the mixed-effects model, using the individual fits as a starting point
fit1 <- nlme(fitL)
## Grab the coefficients of the general model
fxf <- fixed.effects(fit1)
## Add treatment as a fixed effect. Also, use the coefficients from the previous
## model as starting values.
fit2 <- update(fit1, fixed = a + c ~ treatment,
               start = c(fxf[1], 0,
                         fxf[2], 0))
The model output will give you information like the following:
Nonlinear mixed-effects model fit by maximum likelihood
  Model: y ~ SSexpf(xvar, a, c)
  Data: exampleG
       AIC      BIC    logLik
  475.8632 504.6506 -229.9316

Random effects:
 Formula: list(a ~ 1, c ~ 1)
 Level: grouping_factor
 Structure: General positive-definite, Log-Cholesky parametrization
                    StdDev Corr
a.(Intercept) 3.254827e-04 a.(In)
c.(Intercept) 1.248580e-06 0
Residual      5.670317e-01

Fixed effects: a + c ~ treatment
                  Value Std.Error  DF   t-value p-value
a.(Intercept)  9.634383 0.2189967 264  43.99329  0.0000
a.treatmentB   0.353342 0.3621573 264   0.97566  0.3301
c.(Intercept) -0.204848 0.0060642 264 -33.77976  0.0000
c.treatmentB  -0.092138 0.0120463 264  -7.64867  0.0000
 Correlation:
             a.(In) a.trtB c.(In)
a.treatmentB -0.605
c.(Intercept) -0.785  0.475
c.treatmentB  0.395 -0.792 -0.503

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-1.93208903 -0.34340037  0.04767133  0.78924247  1.95516431

Number of Observations: 270
Number of Groups: 3
Then, if you wanted to visualize the model fit, you could do the following.
## Here we store the model predictions for visualization purposes
predictionsDf <- cbind(example_data,
                       predict_nlme(fit2, interval = 'conf'))
## Here we make a graph to check it out
ggplot() +
  geom_ribbon(data = predictionsDf,
              aes(x = xvar, ymin = Q2.5, ymax = Q97.5, fill = treatment),
              color = NA, alpha = 0.3) +
  geom_point(data = example_data, aes(x = xvar, y = y, col = treatment)) +
  geom_line(data = predictionsDf, aes(x = xvar, y = Estimate, col = treatment), size = 1.1)
This shows the model fit.

Ruby version of gamma.fit from scipy.stats

As the title suggests, I am trying to find a function that can take an array of floats and find a distribution that fits my data.
From there I'll use it to find the CDF of new data I am passing it.
I have installed and looked through the sciruby Distribution and NArray docs, but nothing appears to match the 'fit' method.
The Python code looks like this:
from scipy import stats

# Approach 2: Model-based percentiles.
# Step 1: Find a Gamma distribution that fits your data
alpha, _, beta = stats.gamma.fit(data, floc = 0.)
# Step 2: Use that distribution's CDF to get percentiles.
scores = 100-100*stats.gamma.cdf(new_data, a = alpha, scale=beta)
print(scores)
Thank you in advance
After a deep dive into other packages and a lot of help from someone on the 'Cross Validated' forum, I have the answer I needed.
In order to obtain the 'alpha' and 'beta' values that give the shape and rate of the gamma distribution, you first need to work out the 'variance' of the data.
There are a few approaches to achieving this. See here for more information:
https://www.statisticshowto.com/probability-and-statistics/descriptive-statistics/sample-variance/
Code example:
data = [<insert your numbers>]

# Sample variance via the computational formula
sum = data.sum
sum_square_mean = (sum**2) / data.size
all_square = data.map { |n| n**2 }.sum
net_square = all_square - sum_square_mean
minus_one = data.size - 1          # Bessel's correction (n - 1)
variance = net_square / minus_one

# Method-of-moments estimates for the gamma distribution
mean = data.sum(0.0) / data.size
mean_squared = mean**2
alpha = mean_squared / variance    # shape
beta = mean / variance             # rate
theta = variance / mean            # scale (1 / rate)
The 'minus_one' line isn't strictly necessary, but it is done in statistics to reduce bias in the estimate; look up Bessel's correction. You can instead get the variance from net_square / data.size.
Second option, using the 'descriptive_statistics' gem:
require('descriptive_statistics')
# doesn't account for Bessel's correction
#alpha = (data.mean**2) / data.variance
#beta = data.mean / data.variance
#theta = data.variance / data.mean
Once you have these values, you can use the cdf function from the Distribution gem (docs here).
The next stage is to pass the values into this function, which will return a percentile.
Make sure to use the '1 over beta' calculation or it won't work:
percentile = 100 - (100 * Distribution::Gamma::Ruby_.cdf(x, alpha, 1 / beta))
You may have noticed I have also calculated theta.
This was for a separate function that lets me return a value from my gamma distribution by passing in a percentile. It is used like so:
value = Distribution::Gamma.quantile(0.5, alpha, theta)
This function is also known as the 'inverse CDF', 'inverse cumulative distribution function', 'probability point function', or 'percent point function'. Here it is simply named 'quantile'.
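As a cross-check against the Python original, here is a sketch of the same moment-matching computation in scipy (with made-up data; note that scipy's gamma.fit performs maximum-likelihood estimation, so its estimates will be close to, but not identical to, these moment-based ones):
import numpy as np
from scipy import stats

data = np.array([1.2, 0.8, 2.5, 1.9, 3.1, 0.6, 1.4])  # made-up example data

# Method-of-moments estimates, mirroring the Ruby code above
mean = data.mean()
variance = data.var(ddof=1)   # ddof=1 applies Bessel's correction
alpha = mean**2 / variance    # shape
theta = variance / mean       # scale (1 / rate)

# Percentile of a new observation, mirroring the Ruby cdf call
percentile = 100 - 100 * stats.gamma.cdf(2.0, a=alpha, scale=theta)

# Inverse CDF: 'quantile' in the Distribution gem, 'ppf' in scipy
value = stats.gamma.ppf(0.5, a=alpha, scale=theta)
print(percentile, value)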
For more information on gamma distributions, please see the Wikipedia article on the Gamma Distribution.

Iterative quantile estimation in Matlab

I'm trying to implement an iterative algorithm to estimate quantiles in data that is generated from a Monte-Carlo simulation. I want to make it iterative because I have many iterations and variables, so storing all data points and using Matlab's quantile function would take up much of the memory that I actually need for the simulation.
I found some approaches based on the Robbins-Monro process, given by
q_t = q_{t-1} + c_t * (p - I(x_t < q_{t-1}))
where I(·) is the indicator function and p is the target probability.
The implementation with a control sequence c_t = c / t, where c is constant, is quite straightforward. In the cited paper, they show that c = 2 * sqrt(2 * pi) gives quite good results, at least for the median. But they also propose an adaptive approach based on an estimate of the histogram. Unfortunately, I haven't figured out how to implement this adaptation yet.
I tested the implementation with a constant c on three test samples of 10,000 data points each. The value c = 2 * sqrt(2 * pi) did not work well for me, but c = 100 looks quite good for the test samples. However, this selection is not very robust; it failed in the actual Monte-Carlo simulation, giving results wide of the mark.
probabilities = [0.1, 0.4, 0.7];
controlFactor = 100;
quantile = zeros(size(probabilities));
indicator = zeros(size(probabilities));
for index = 1:length(data)
    control = controlFactor / index;
    indices = (data(index) >= quantile);
    indicator(indices) = probabilities(indices);
    indices = (data(index) < quantile);
    indicator(indices) = probabilities(indices) - 1;
    quantile = quantile + control * indicator;
end
Is there a more robust solution for iterative quantile estimation or does anyone have an implementation for an adaptive approach with small memory consumption?
After trying some of the adaptive iterative approaches that I found in the literature without great success (I am not sure whether I implemented them correctly), I came up with a solution that gives me good results for my test samples and also for the actual Monte-Carlo simulation.
I buffer a subset of simulation results, compute the sample quantiles, and average over all subset sample quantiles in the end. This seems to work quite well and without tuning many parameters. The only parameter is the buffer size, which is 100 in my case.
The results converge quite fast, and increasing the sample size does not improve the results dramatically. There seems to be a small but constant bias, which presumably is the averaged error of the subset sample quantiles. And that is the downside of my solution: by choosing the buffer size, one fixes the achievable accuracy. Increasing the buffer size reduces this bias, so in the end it is a memory/accuracy tradeoff.
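To make the memory-saving aspect explicit, here is a minimal streaming sketch of the buffered-subset idea in Python (class and variable names are my own); it processes one value at a time, so only the buffer is ever held in memory. The MATLAB script below indexes a stored array only so it can compare against the full-sample quantile.
import numpy as np

class BufferedQuantile:
    """Average of per-buffer sample quantiles (illustrative sketch)."""
    def __init__(self, probability, buffer_size=100):
        self.probability = probability
        self.buffer_size = buffer_size
        self.buffer = []
        self.quantile_sum = 0.0
        self.num_buffers = 0

    def update(self, value):
        self.buffer.append(value)
        if len(self.buffer) == self.buffer_size:
            # Full buffer: take its sample quantile, then discard the data
            self.quantile_sum += np.quantile(self.buffer, self.probability)
            self.num_buffers += 1
            self.buffer.clear()

    def estimate(self):
        return self.quantile_sum / self.num_buffers

# Usage with a stream of simulated values
rng = np.random.default_rng(0)
estimator = BufferedQuantile(probability=0.2)
for _ in range(10000):
    estimator.update(rng.normal())
print(estimator.estimate())  # close to the 0.2 quantile of a standard normal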
% Generate data
rng('default');
data = sqrt(0.5) * randn(10000, 1) + 5 * rand(10000, 1) + 10;

% Set parameters
probabilities = 0.2;

% Compute reference sample quantiles
quantileEstimation1 = quantile(data, probabilities);

% Estimate quantiles by computing the mean over a number of subset
% sample quantiles
subsetSize = 100;
quantileSum = 0;
for index = 1:length(data) / subsetSize
    quantileSum = quantileSum + quantile(data(((index - 1) * subsetSize + 1):(index * subsetSize)), probabilities);
end
quantileEstimation2 = quantileSum / (length(data) / subsetSize);

% Estimate quantiles with iterative computation
quantileEstimation3 = zeros(size(probabilities));
indicator = zeros(size(probabilities));
controlFactor = 2 * sqrt(2 * pi);
for index = 1:length(data)
    control = controlFactor / index;
    indices = (data(index) >= quantileEstimation3);
    indicator(indices) = probabilities(indices);
    indices = (data(index) < quantileEstimation3);
    indicator(indices) = probabilities(indices) - 1;
    quantileEstimation3 = quantileEstimation3 + control * indicator;
end

fprintf('Reference result: %f\nSubset result: %f\nIterative result: %f\n\n', quantileEstimation1, quantileEstimation2, quantileEstimation3);

MATLAB: How can I create autocorrelated data?

I'm looking to create a vector of autocorrelated data points in MATLAB, with the lag-1 autocorrelation higher than the lag-2 autocorrelation, and so on.
If I look at the lag-1 data pairs (1, 2), (3, 4), (5, 6), ..., the correlation is relatively high, but at lag 2 it is reduced.
I found a way to do this in R:
x <- filter(rnorm(1000), filter=rep(1,3), circular=TRUE)
However, I'm not sure how to do the same thing in MATLAB. Ideally I'd like to be able to fine-tune exactly how autocorrelated the data are.
Math:
A standard family of models for autocorrelation in stationary time series is the so-called "autoregressive model"; e.g., an autoregressive model with one term is known as an AR(1) and is:
y_t = a + b*y_{t-1} + e_t
AR(1) sounds simplistic, but it turns out to be quite a powerful tool. E.g., an AR(p) with p autoregressive terms is actually an AR(1) on a p-dimensional vector (check the Wikipedia page). Note also that b = 1 gives a non-stationary random walk.
A more intuitive way to write what's going on (in the stationary case with |b| < 1) is to define u = a / (1 - b) (it turns out u is the unconditional mean of the AR(1)); then, with some algebra:
y_t - u = b * (y_{t-1} - u) + e_t
That is, the difference from the unconditional mean u gets hit with a decay factor b, and then a shock term e_t gets added. (You want -1 < b < 1 for stationarity.)
Code:
Since e_t denotes the shock term, this is super easy to simulate. Eg. to simulate an AR(1):
a = 0; b = .4; sigma = 1; T = 1000;
y0 = a / (1 - b); % e.g. initialize to the unconditional mean of the stationary series
y = zeros(T,1);
y(1) = a + b * y0 + randn() * sigma;
for t = 2:T
    y(t) = a + b * y(t-1) + randn() * sigma;
end
This code isn't meant to be fast, but illustrative. An AR(1) model implies a certain type of correlation structure, but by adding AR or MA terms you can fit some pretty funky stuff. (MA stands for moving-average model.)
You can test the sample autocorrelation with autocorr(y). For reference, the bible on time-series mathematics is Hamilton's book Time Series Analysis.
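As a rough cross-check of how b controls the correlation structure (for a stationary AR(1), the lag-k autocorrelation is b^k), here is a small sketch in Python; the variable names are my own:
import numpy as np

# Simulate an AR(1): y_t = a + b*y_{t-1} + e_t
rng = np.random.default_rng(0)
a, b, sigma, T = 0.0, 0.4, 1.0, 100000
y = np.zeros(T)
y[0] = a / (1 - b)  # start at the unconditional mean
for t in range(1, T):
    y[t] = a + b * y[t-1] + rng.normal(0, sigma)

# Sample autocorrelation at lags 1 and 2; theory predicts b and b**2
yc = y - y.mean()
def acf(k):
    return np.dot(yc[:-k], yc[k:]) / np.dot(yc, yc)
print(acf(1), acf(2))  # roughly 0.4 and 0.16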

Avoiding numerical overflow when calculating the value AND gradient of the Logistic loss function

I am currently trying to implement a machine learning algorithm that involves the logistic loss function in MATLAB. Unfortunately, I am having some trouble due to numerical overflow.
In general, for a given input s, the value of the logistic loss function is:
log(1 + exp(s))
and the slope of the logistic loss function is:
exp(s)./(1 + exp(s)) = 1./(1 + exp(-s))
In my algorithm, the value of s = X*beta. Here X is a matrix with N data points and P features per data point (i.e., size(X) = [N, P]) and beta is a vector of P coefficients, one per feature, such that size(beta) = [P 1].
I am specifically interested in calculating the average value and gradient of the logistic loss for a given value of beta.
The average value of the logistic loss w.r.t. a value of beta is:
L = 1/N * sum(log(1+exp(X*beta)),1)
The average value of the slope of the logistic loss w.r.t. a value of beta is:
dL = 1/N * sum((exp(X*beta)./(1+exp(X*beta))' X, 1)'
Note that size(dL) = [P 1].
My issue is that these expressions keep producing numerical overflows. The problem effectively comes from the fact that exp(s) = Inf once s is large (around s > 709 in double precision) and exp(s) = 0 once s is very negative (around s < -745).
I am looking for a solution such that s can take on any value in floating-point arithmetic. Ideally, I would also really appreciate a solution that allows me to evaluate the value and gradient in a vectorized / efficient way.
How about the following approximations:
– For computing L, if s is large, then exp(s) will be much larger than 1:
1 + exp(s) ≅ exp(s)
and consequently
log(1 + exp(s)) ≅ log(exp(s)) = s.
If s is very negative, then exp(s) underflows to 0, and using the Taylor series of log(1 + x) ≅ x for small x
log(1 + exp(s)) ≅ exp(s),
which is itself vanishingly small.
– For computing dL, for large s
exp(s) ./ (1 + exp(s)) ≅ 1
and for very negative s
exp(s) ./ (1 + exp(s)) ≅ exp(s) ≅ 0, which causes no overflow to begin with.
– The code to compute L could look for example like this:
s = X*beta;
l = log(1 + exp(s));
ind = isinf(l);
l(ind) = s(ind);        % overflow: log(1 + exp(s)) ≅ s for large s
ind = (l == 0);
l(ind) = exp(s(ind));   % underflow: log(1 + exp(s)) ≅ exp(s) for very negative s
L = 1/N * sum(l, 1)
I found a good article about this problem.
Cutting through a lot of words, we can simplify the argument to stating that the original expression
log(1 + exp(s))
can be rewritten as
log(exp(s)*(exp(-s) + 1))
= log(exp(s)) + log(exp(-s) + 1)
= s + log(exp(-s) + 1)
This stops overflow from occurring. It doesn't prevent underflow, but by the time that occurs, you have your answer (namely, s). You can't just use this form instead of the original in all cases, since for large negative s the exp(-s) term would overflow instead. However, we now have the basis for a function that will be accurate and won't produce over/underflow:
function LL = logistic(s)
    if s < 0
        LL = log(1 + exp(s));   % safe: exp(s) <= 1 for s < 0
    else
        LL = s + logistic(-s);  % recurse with a non-positive argument
    end
end
I think this maintains reasonably good accuracy.
EDIT: now to the meat of your question - making this vectorized, and allowing the calculation of the slope as well. Let's take these one at a time:
function LL = logisticVec(s)
    LL = zeros(size(s));
    LL(s < 0) = log(1 + exp(s(s < 0)));
    LL(s >= 0) = s(s >= 0) + log(1 + exp(-s(s >= 0)));
end
To obtain the average you wanted:
L = sum(logisticVec(X*beta)) / N;
The slope is a little bit trickier; note that I believe you may have a typo in your expression (a missing multiplication sign). In matrix form, the gradient is
dL = (X' * (exp(X*beta) ./ (1 + exp(X*beta)))) / N;
which has the required size [P 1]. If we divide the top and bottom of the sigmoid by exp(X*beta), we get
dL = (X' * (1 ./ (exp(-X*beta) + 1))) / N;
Once again the overflow has gone away, and we are left with underflow - but since the underflowed value has 1 added to it, the error this creates is insignificant.
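For reference, NumPy and SciPy ship this trick built in: np.logaddexp(0, s) computes log(1 + exp(s)) stably, and scipy.special.expit is an overflow-safe sigmoid. A short sketch of the same computation under those assumptions:
import numpy as np
from scipy.special import expit

def logistic_loss_and_grad(X, beta):
    """Average logistic loss and its gradient, overflow-safe."""
    s = X @ beta
    N = X.shape[0]
    L = np.logaddexp(0.0, s).sum() / N  # log(1 + exp(s)), computed stably
    dL = X.T @ expit(s) / N             # expit(s) = 1 / (1 + exp(-s))
    return L, dL

# Quick check with values that would overflow a naive exp()
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3)) * 1000.0
beta = np.array([1.0, -2.0, 0.5])
print(logistic_loss_and_grad(X, beta))  # finite, no warnings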