Loss clipping in TensorFlow (on DeepMind's DQN) - neural-network

I am working on my own implementation of DeepMind's DQN paper in TensorFlow and am running into difficulty with clipping of the loss function.
Here is an excerpt from the nature paper describing the loss clipping:
We also found it helpful to clip the error term from the update to be between −1 and 1. Because the absolute value loss function |x| has a derivative of −1 for all negative values of x and a derivative of 1 for all positive values of x, clipping the squared error to be between −1 and 1 corresponds to using an absolute value loss function for errors outside of the (−1,1) interval. This form of error clipping further improved the stability of the algorithm.
(link to full paper: http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html)
What I have tried so far is using
clipped_loss_vec = tf.clip_by_value(loss, -1, 1)
to clip the loss I calculate between -1 and +1. The agent is not learning the proper policy in this case. I printed out the gradients of the network and realized that if the loss falls below -1, the gradients all suddenly turn to 0!
My reasoning for this happening is that the clipped loss is a constant function in (-inf,-1) U (1,inf), which means it has zero gradient in those regions. This in turn ensures that the gradients throughout the network are zero (think of it as, whatever input image I provide the network, the loss stays at -1 in the local neighborhood because it has been clipped).
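As a quick check of this reasoning, here is a minimal TF1-style sketch (with a scalar placeholder standing in for the loss; not part of the original question) showing that the gradient of tf.clip_by_value is zero outside the clipping range:
import tensorflow as tf

x = tf.placeholder(tf.float32)                 # stands in for the unclipped loss
clipped = tf.clip_by_value(x, -1.0, 1.0)
grad = tf.gradients(clipped, x)[0]
with tf.Session() as sess:
    print(sess.run(grad, feed_dict={x: 3.0}))  # prints 0.0: the clipped value is constant around x = 3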
So, my question has two parts:
What exactly did DeepMind mean in the excerpt? Did they mean that the loss below -1 is clipped to -1 and the loss above +1 is clipped to +1? If so, how did they deal with the gradients (i.e. what is all that talk about absolute value functions)?
How should I implement loss clipping in TensorFlow such that the gradients do not go to zero outside the clipped range (but perhaps stay at +1 and -1)?
Thanks!

I suspect they mean that you should clip the gradient to [-1,1], not clip the loss function. Thus, you compute the gradient as usual, but then clip each component of the gradient to be in the range [-1,1] (so if it is larger than +1, you replace it with +1; if it is smaller than -1, you replace it with -1); and then you use the result in the gradient descent update step instead of using the unmodified gradient.
Equivalently: Define a function f as follows:
f(x) = x^2 if x in [-0.5,0.5]
f(x) = |x| - 0.25 if x < -0.5 or x > 0.5
Instead of using something of the form s^2 as the loss function (where s is some complicated expression), they suggest using f(s) as the loss function. This is a kind of hybrid between squared loss and absolute-value loss: it will behave like s^2 when s is small, but when s gets larger, it will behave like the absolute value (|s|).
Notice that f has the nice property that its derivative is always in the range [-1,1]:
f'(x) = 2x if x in [-0.5,0.5]
f'(x) = +1 if x > 0.5
f'(x) = -1 if x < -0.5
Thus, when you take the gradient of this f-based loss function, the result will be the same as computing the gradient of a squared-loss and then clipping it.
Thus, what they're doing is effectively replacing a squared-loss with a Huber loss. The function f is just two times the Huber loss for delta = 0.5.
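As a quick check, using the standard definition L_delta(x) = 0.5*x^2 for |x| <= delta and L_delta(x) = delta*(|x| - delta/2) otherwise:
2 * L_0.5(x) = x^2 for |x| <= 0.5
2 * L_0.5(x) = 2 * 0.5 * (|x| - 0.25) = |x| - 0.25 for |x| > 0.5
which is exactly the function f defined above.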
Now the point is that the following two alternatives are equivalent:
Use a squared loss function. Compute the gradient of this loss function, but clip the gradient to [-1,1] before doing the update step of gradient descent.
Use a Huber loss function instead of a squared loss function. Compute the gradient of this loss function directly (unchanged) in the gradient descent.
The former is easy to implement. The latter has nice properties (it improves stability; it's better than absolute-value loss because it avoids oscillating around the minimum). Because the two are equivalent, we get an easy-to-implement scheme that combines the simplicity of squared loss with the stability and robustness of the Huber loss.
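A minimal sketch of the first alternative, per-component gradient clipping, using the TF1-style optimizer API (the loss tensor, optimizer choice, and learning rate are placeholders of mine, not from the original answer):
import tensorflow as tf

optimizer = tf.train.RMSPropOptimizer(learning_rate=0.00025)
grads_and_vars = optimizer.compute_gradients(loss)   # list of (gradient, variable) pairs
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)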

First of all, the code for the paper is available online, which constitutes an invaluable reference.
Part 1
If you take a look at the code you will see that, in nql:getQUpdate (NeuralQLearner.lua, line 180), they clip the error term of the Q-learning function:
-- delta = r + (1-terminal) * gamma * max_a Q(s2, a) - Q(s, a)
if self.clip_delta then
    delta[delta:ge(self.clip_delta)] = self.clip_delta
    delta[delta:le(-self.clip_delta)] = -self.clip_delta
end
Part 2
In TensorFlow, assuming the last layer of your neural network is called self.output, self.actions is a one-hot encoding of all actions, self.q_targets_ is a placeholder with the targets, and self.q is your computed Q:
# The loss function
delta = self.q - self.q_targets_
absolute_delta = tf.abs(delta)
loss = tf.where(
    absolute_delta < 1.0,
    tf.square(delta),              # quadratic inside [-1, 1]
    2.0 * absolute_delta - 1.0     # linear outside [-1, 1], so the gradient stays at +/-1 instead of vanishing
)
Or, using tf.clip_by_value (and having an implementation closer to the original):
delta = tf.clip_by_value(
    self.q - self.q_targets_,
    -1.0,
    +1.0
)

No. They actually talk about error clipping, not loss clipping; as far as I know the two terms are often used for the same thing, which leads to confusion. They DO NOT mean that the loss below -1 is clipped to -1 and the loss above +1 is clipped to +1, because that leads to zero gradients outside the error range [-1, 1], as you realized. Instead, they suggest using a linear loss instead of a quadratic loss for error values < -1 and error values > 1.
Compute the error value r + \gamma \max_{a'} Q(s',a'; \theta_i^-) - Q(s,a; \theta_i). If this error value is within the range [-1, 1], square it; if it is < -1, multiply it by -1; if it is > 1, leave it as it is. If you use this as the loss function, the gradients outside the interval [-1, 1] won't vanish.
In order to have a "smooth-looking" compound loss function, you could also replace the squared loss outside the error range [-1, 1] with a first-order Taylor approximation at the border values -1 and 1. In this case, if e is your error value, you would square it if e \in [-1, 1], replace it by -2e - 1 if e < -1, and replace it by 2e - 1 if e > 1.
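A minimal TensorFlow sketch of this Taylor-smoothed variant (the tensor names q and q_targets are placeholders of mine, not from the answer):
import tensorflow as tf

e = q - q_targets                          # TD error
loss = tf.where(tf.abs(e) <= 1.0,
                tf.square(e),              # e in [-1, 1]: quadratic
                tf.where(e < -1.0,
                         -2.0 * e - 1.0,   # e < -1: first-order Taylor expansion at -1
                         2.0 * e - 1.0))   # e > 1: first-order Taylor expansion at +1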

In the DeepMind paper you reference, they limit the gradient of the loss. This prevents giant gradients and so improves robustness. They do this by using a quadratic loss function for errors inside a small range and an absolute-value loss for larger errors.
I suggest implementing the Huber loss function. Below is a Python TensorFlow implementation.
import tensorflow as tf

def huber_loss(y_true, y_pred, max_grad=1.):
    """Calculates the Huber loss.

    Parameters
    ----------
    y_true: np.array, tf.Tensor
        Target value.
    y_pred: np.array, tf.Tensor
        Predicted value.
    max_grad: float, optional
        Positive floating point value. Represents the maximum possible
        gradient magnitude.

    Returns
    -------
    tf.Tensor
        The Huber loss.
    """
    err = tf.abs(y_true - y_pred, name='abs')
    mg = tf.constant(max_grad, name='max_grad')
    lin = mg * (err - .5 * mg)
    quad = .5 * err * err
    return tf.where(err < mg, quad, lin)
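For example, applied to the asker's placeholders (self.q and self.q_targets_ come from the question; the mean reduction and optimizer are my own choices, not from the answer):
loss = tf.reduce_mean(huber_loss(self.q_targets_, self.q, max_grad=1.))
train_op = tf.train.RMSPropOptimizer(0.00025).minimize(loss)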

Related

Optimization algorithm in Matlab

I want to calculate the maximum of the CROSS-IN-TRAY benchmark function. So I have made this function in MATLAB:
function f = CrossInTray2(x)
%the CrossInTray2 objective function
%
f = 0.0001 *(( abs(sin(x(:,1)).* sin(x(:,2)).*exp(abs(100 - sqrt(x(:,1).^2 + x(:,2).^2)/3.14159 )) )+1 ).^0.1);
end
I multiplied the whole formula by (-1) to invert the function, so that finding the minimum of the inverted formula actually gives the maximum of the original one.
Then when I go to the optimization tools, select the GA algorithm, and define the lower and upper bounds as -3 and 3, it shows me a result of about 0.13 after about 60 iterations, and the final point is something like [0, 9.34].
How is it possible that the final point is not in the range defined by the bounds? And what is the actual maximum of this function?
The maximum is at (0,0) (actually, wherever either input is 0, and periodically at multiples of pi). After you negate, you're looking for a minimum of a positive quantity. Just looking at the outer absolute value, it obviously can't get lower than 0. That trivially occurs when either value of sin(x) is 0.
Plugging in, you have f_min = f(0,0) = .0001(0 + 1)^0.1 = 1e-4
This expression is trivial to evaluate and plot over a 2d grid. Do that until you figure out what you're looking at, and what the approximate answer should be, and only then invoke an actual optimizer. GA does not sound like a good candidate for a relatively smooth expression like this. The reason you're getting strange answers is the fact that only one of the input parameters has to be 0. Once the optimizer finds one of those, the other input could be anything.
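For instance, a quick grid evaluation (sketched here in Python/NumPy rather than MATLAB; the grid range and resolution are my own choices):
import numpy as np

x1, x2 = np.meshgrid(np.linspace(-3, 3, 601), np.linspace(-3, 3, 601))
f = 0.0001 * (np.abs(np.sin(x1) * np.sin(x2)
                     * np.exp(np.abs(100 - np.sqrt(x1**2 + x2**2) / 3.14159))) + 1) ** 0.1
print(f.min())                                   # 1e-4, attained wherever sin(x1) or sin(x2) is 0
print(np.unravel_index(f.argmin(), f.shape))     # index of one such grid point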

How to handle computing the logarithm of zero in prior information

I am working on image classification. I am using a quantity called the prior probability (from Bayes' rule). Its range is [0,1], and I need to compute its logarithm. However, as you know, the logarithm of zero is -Inf.
For example, given a pixel x in an image I (size 3 by 3) with a cost function such as
Cost(x)=30+log(prior(x))
where prior is a 3-by-3 matrix
prior=[ 0 0 0.5;
1 1 0.2;
0.4 0 0]
I =[ 1 2 3;
4 5 6;
7 8 9]
I want to compute the cost of x=1, which is
cost(x=1) = 30 + log(0)
Now, log(0) is -Inf, so cost(x=1) is also -Inf. My assumption is that prior=0 means the given pixel belongs to the background, and prior=1 means the given pixel belongs to the foreground.
My question is how to compute log(prior) in a way that satisfies my assumption.
I am using MATLAB. My idea is to treat log(0) as a very small (very negative) value, and I just set it to -9 in my code:
%% Handle with log(0)
prior(prior==0.0) = NaN;
%% Compute log
log_prior=log(prior);
%% Assume that e^-9 is very near 0.
log_prior(isnan(log_prior)) = -9;
UPDATE: To make clear what I am doing, let's look at Bayes' rule. My task is to decide whether a given pixel x belongs to the background (BG) or the foreground (FG). This depends on the probability
P(x∈BG|x)=P(x|x∈BG)P(x∈BG)/P(x)
in which P(x|x∈BG) is the likelihood function (assumed to be approximated by a Gaussian distribution), P(x∈BG) is the prior term, and P(x) can be ignored because it is constant.
Using maximum-a-posteriori (MAP) estimation, we can map the above equation into log space (to deal with the exponential in the Gaussian function):
Cost(x)=log(P(x∈BG|x))=log(P(x|x∈BG))+log(P(x∈BG))
To keep it simple, let's assume log(P(x|x∈BG)) = 30 and that log(P(x∈BG)) is log(prior); then my cost function can be rewritten as
Cost(x)=30+log(prior(x))
Now the problem is that the prior lies in [0,1] and can be exactly 0, and log(0) is -Inf. As chepner said, we can add an eps value, as in
log(prior+eps)
However, log(eps) is a very large negative number. It will dominate my cost function (which also becomes a very large negative number), and then the first term in my cost function (30) no longer matters. Based on my assumption, prior(x)=0 should mean the pixel x is BG and prior(x)=1 should mean it is FG. How should I handle log(prior) when I compute my cost function?
The correct thing to do, before fiddling with MATLAB, is to try to understand your problem. Ask yourself "what does it mean for the prior probability to vanish?". The answer is given by Bayes' theorem, one form of which is:
posterior = likelihood * prior / normalization
So places where the prior is zero are, by definition, places where you are certain that your events (the things whose probabilities you are computing) cannot happen, regardless of their apparent likelihood (i.e. "cost"). So they are not interesting for you. Just recognize that and skip them.
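A minimal sketch of that idea (in Python/NumPy rather than MATLAB; masking is just one possible way to "skip" the zero-prior pixels):
import numpy as np

prior = np.array([[0, 0, 0.5],
                  [1, 1, 0.2],
                  [0.4, 0, 0]])

cost = np.full(prior.shape, -np.inf)      # zero-prior pixels stay "impossible"
mask = prior > 0
cost[mask] = 30 + np.log(prior[mask])     # evaluate the cost only where the prior is non-zero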

MATLAB: FFT with specified modes

Using MATLAB I want to implement a kind of spectral method. The idea is as follows (described for an example that already works; a sketch of this workflow for the periodic case is shown after the list):
1. Dirichlet (and Neumann, and periodic) boundaries lead to eigenvalues in Fourier space of k = n*pi/L.
2. Project all the linear operators in Fourier space onto the discretized k-values, e.g. L = -D*(k.*k) (for diffusion only).
3. Define the propagator in time as P = exp( dt * L ).
4. Calculate the evolution in time iteratively by uh_{n+1} = uh_n * P.
5. Return the calculated value to real space every time I want to save it, via ifft( uh ).
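Here is a minimal sketch of steps 1-5 for the periodic diffusion case (in Python/NumPy rather than MATLAB; the grid size, diffusion constant, time step, and initial condition are my own choices):
import numpy as np

N, L_dom, D, dt, nsteps = 256, 2 * np.pi, 0.1, 0.01, 1000
x = np.linspace(0, L_dom, N, endpoint=False)
u = np.exp(-10 * (x - L_dom / 2) ** 2)          # initial condition

k = 2 * np.pi * np.fft.fftfreq(N, d=L_dom / N)  # the equidistant modes of the periodic FFT
Lop = -D * k ** 2                               # linear (diffusion) operator in Fourier space
P = np.exp(dt * Lop)                            # propagator in time

uh = np.fft.fft(u)
for _ in range(nsteps):
    uh = uh * P                                 # uh_{n+1} = uh_n * P
u = np.fft.ifft(uh).real                        # back to real space whenever a snapshot is needed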
My question concerns other boundary conditions.
In my case I have Robin boundary conditions. So, the eigenvalues are defined through some weird equation of the form tan( x ) = x or the like. The problem of computing them is solved.
Since I have those values, steps 2 and 3 are simple too, but:
To apply P to the Fourier-transformed vector uh, I have to ensure that my uh = fft(u) uses the same eigenvalues, which is not the case by default.
By default MATLAB uses equidistant modes for the fft.
Is there any simple trick for this?
Or, maybe, do I have any mistake in my thoughts?

Minimization of L1-Regularized system, converging on non-minimum location?

This is my first post to Stack Overflow, so if this isn't the correct area I apologize. I am working on minimizing an L1-regularized system.
This weekend is my first dive into optimization. I have a basic linear system Y = X*B, where X is an n-by-p matrix, B is a p-by-1 vector of model coefficients, and Y is an n-by-1 output vector.
I am trying to find the model coefficients, and I have implemented both gradient descent and coordinate descent algorithms to minimize the L1-regularized system. To find my step size I am using the backtracking algorithm, and I terminate by looking at the 2-norm of the gradient and stopping when it is 'close enough' to zero (for now I'm using 0.001).
The function I am trying to minimize is (0.5)*(norm((Y - X*B),2)^2) + lambda*norm(B,1). (Note: by norm(Y,2) I mean the 2-norm of the vector Y.) My X matrix is 150-by-5 and is not sparse.
If I set the regularization parameter lambda to zero, I should converge on the least squares solution, and I can verify that both my algorithms do this pretty well and fairly quickly.
If I start to increase lambda, my model coefficients all tend towards zero, which is what I expect, but my algorithms never terminate, because the 2-norm of the gradient is always a positive number. For example, a lambda of 1000 gives me coefficients in the 10^(-19) range but a gradient 2-norm of ~1.5, and this is after several thousand iterations. While my gradient values all converge to something in the 0 to 1 range, my step size becomes extremely small (10^(-37) range). If I let the algorithm run longer the situation does not improve; it appears to have gotten stuck somehow.
Both my gradient and coordinate descent algorithms converge on the same point and give the same norm2(gradient) number for the termination condition. They also work quite well with a lambda of 0. If I use a very small lambda (say 0.001) I get convergence; a lambda of 0.1 looks like it would converge if I ran it for an hour or two; with any larger lambda the convergence rate is so small it's useless.
I had a few questions and thoughts that I think might relate to the problem:
In calculating the gradient I am using a central finite difference, (f(x+h) - f(x-h))/(2h), with an h of 10^(-5). Any thoughts on this value of h?
Another thought was that at these very tiny steps it is traveling back and forth in a direction nearly orthogonal to the minimum, making the convergence rate so slow it is useless.
My last thought was that perhaps I should be using a different termination method, for example looking at the rate of convergence and terminating if it becomes extremely slow. Is this a common termination method?
The 1-norm isn't differentiable. This will cause fundamental problems with a lot of things, notably the termination test you chose; the gradient will change drastically around your minimum and fail to exist on a set of measure zero.
The termination test you really want will be along the lines of "there is a very short vector in the subgradient."
It is fairly easy to find the shortest vector in the subgradient of ||Ax-b||_2^2 + lambda ||x||_1. Choose, wisely, a tolerance eps and do the following steps:
Compute v = grad(||Ax-b||_2^2).
If x[i] < -eps, then subtract lambda from v[i]. If x[i] > eps, then add lambda to v[i]. If -eps <= x[i] <= eps, then add the number in [-lambda, lambda] to v[i] that minimises |v[i]|.
You can do your termination test here, treating v as the gradient. I'd also recommend using v for the gradient when choosing where your next iterate should be.
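A minimal NumPy sketch of that test (the function name, and the use of the question's 0.5*||Y - X*B||_2^2 scaling for the smooth part, are my own choices):
import numpy as np

def min_norm_subgradient(X, Y, B, lam, eps=1e-8):
    v = X.T @ (X @ B - Y)                  # gradient of the smooth part 0.5*||X*B - Y||_2^2
    g = v.copy()
    g[B > eps] += lam                      # d/dB_i of lam*|B_i| is +lam for B_i > 0
    g[B < -eps] -= lam                     # ... and -lam for B_i < 0
    near_zero = np.abs(B) <= eps
    # at B_i ~ 0, pick the element of [v_i - lam, v_i + lam] closest to zero
    g[near_zero] = np.sign(v[near_zero]) * np.maximum(np.abs(v[near_zero]) - lam, 0.0)
    return g

# terminate when np.linalg.norm(min_norm_subgradient(X, Y, B, lam)) is sufficiently small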

The deconv() function in MATLAB does not invert the conv() function

I would like to convolve a time-series containing two spikes (call it Spike) with an exponential kernel (k) in MATLAB. Call the convolved response "calcium1". I would like to recover the original spike ("reconSpike") data using deconvolution with the kernel. I am using the following code.
k1=zeros(1,5000);
k1(1:1000)=(1.1.^((1:1000)/100)-(1.1^0.01))/((1.1^10)-1.1^0.01);
k1(1001:5000)=exp(-((1001:5000)-1001)/1000);
k1(1)=k1(2);
spike = zeros(100000,1);
spike(1000)=1;
spike(1100)=1;
calcium1=conv(k1, spike);
reconSpike1=deconv(calcium1, k1);
The problem is that at the end of reconSpike, I get a chunk of very large, high-amplitude waves that were not in the original data. Does anyone know why, and how to fix it?
Thanks!
It works for me if you keep the spike vector the same length as the k1 vector. i.e.:
k1=zeros(1,5000);
k1(1:1000)=(1.1.^((1:1000)/100)-(1.1^0.01))/((1.1^10)-1.1^0.01);
k1(1001:5000)=exp(-((1001:5000)-1001)/1000);
k1(1)=k1(2);
spike = zeros(5000, 1);
spike(1000)=1;
spike(1100)=1;
calcium1=conv(k1, spike);
reconSpike1=deconv(calcium1, k1);
Any reason you made them different?
You are running into either a problem with MATLAB's deconvolution algorithm, or floating point precision problems (or maybe both). I suspect it's floating point precision due to all the divisions and subtractions that take place during the deconvolution, but it might be worth contacting MathWorks directly to ask what they think.
Per MATLAB documentation, if [q,r] = deconv(v,u), then v = conv(u,q)+r must also hold (i.e., the output of deconv should always satisfy this). In your case this is violently violated. Put the following at the end of your script:
[reconSpike1 rem]=deconv(calcium1, k1);
max(conv(k1, reconSpike1) + rem - calcium1)
I get 6.75e227, which is not zero ;-) Next try changing the length of spike to 6000; you will get a small number (~1e-15). Gradually increase the length of spike; the error will get larger and larger. Note that if you put only one non-zero element into your spike, this behavior doesn't happen: the error is always zero. It makes sense; all MATLAB needs to do is divide everything by the same number.
Here's a simple demonstration using random vectors:
v = random('uniform', 1,2,100,1);
u = random('uniform', 1,2,100,1);
[q r] = deconv(v,u);
fprintf('maximum error for length(v) = 100 is %f\n', max(conv(u, q) + r - v))
v = random('uniform', 1,2,1000,1);
[q r] = deconv(v,u);
fprintf('maximum error for length(v) = 1000 is %f\n', max(conv(u, q) + r - v))
The output is:
maximum error for length(v) = 100 is 0.000000
maximum error for length(v) = 1000 is 14.910770
I don't know what you are really trying to accomplish, so it's hard to give further advice. But I'll just point out that if you have a problem where pulses are piling up and you want to extract information about each pulse, this can be a tricky problem. I know some people who work on things like this, so if you want some references let me know and I will ask them.
You should never expect that a deconvolution can simply undo a convolution. This is because the deconvolution is an ill-posed problem.
The problem comes from the fact that convolution is an integral operator (in the continuous case you write down an integral int f(x) g(x-t) dx or something similar). Now, the inverse of computing an integral (the de-convolution) is to apply a differentiation. Unfortunately, differentiation amplifies noise in the input. Thus, if your integral has even slight errors on it (and floating-point inaccuracies might already be enough), you end up with a totally different outcome after differentiation.
There are some ways this amplification can be mitigated, but these have to be tried on a per-application basis.