Keras get_weights interpretation for RNNs - neural-network

When I run this code with Keras:
import numpy as np
from keras.layers import Input, SimpleRNN
from keras.models import Model

networkDrive = Input(batch_shape=(1, length, 1))
network = SimpleRNN(3, activation='tanh', stateful=False, return_sequences=True)(networkDrive)
generatorNetwork = Model(networkDrive, network)
predictions = generatorNetwork.predict(noInput, batch_size=length)
print(np.array(generatorNetwork.layers[1].get_weights()))
I get this output:
[array([[ 0.91814435, 0.2490257 , 1.09242284]], dtype=float32)
array([[-0.42028981, 0.68996912, -0.58932084],
[-0.88647962, -0.17359462, 0.42897415],
[ 0.19367599, 0.70271438, 0.68460363]], dtype=float32)
array([ 0., 0., 0.], dtype=float32)]
I suppose that the (3,3) matrix is the weight matrix connecting the RNN units with each other, and one of the other two arrays is probably the bias.
But what is the third?

In the SimpleRNN implementation there are indeed 3 sets of weights.
weights[0] is the input matrix. It transforms the input and therefore has shape [input_dim, output_dim].
weights[1] is the recurrent matrix. It transforms the recurrent state and has shape [output_dim, output_dim].
weights[2] is the bias vector. It is added to the output and has shape [output_dim].
The results of the three operations are summed and then passed through the activation function.
I hope this is clearer now.
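For concreteness, here is a minimal NumPy sketch of a single SimpleRNN step built from these three arrays (it reuses generatorNetwork from the question; the other names are illustrative and tanh is assumed):
import numpy as np

W_in, W_rec, b = generatorNetwork.layers[1].get_weights()  # shapes (1, 3), (3, 3), (3,)
x_t = np.zeros((1, 1))     # one timestep of input, shape (batch, input_dim)
h_prev = np.zeros((1, 3))  # previous hidden state, shape (batch, output_dim)
# one step: input transform + recurrent transform + bias, then the activation
h_t = np.tanh(x_t.dot(W_in) + h_prev.dot(W_rec) + b)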

Related

How to build a recurrent neural net in Keras where each input goes through a layer first?

I'm trying to build a neural net in Keras that would look like this:
Where x_1, x_2, ... are input vectors that undergo the same transformation f. f is itself a layer whose parameters must be learned. The sequence length n is variable across instances.
I'm having trouble understanding two things here:
What should the input look like?
I'm thinking of a 2D tensor with shape (number_of_x_inputs, x_dimension), where x_dimension is the length of a single vector $x$. Can such a 2D tensor have a variable shape? I know tensors can have variable shapes for batch processing, but I don't know if that helps me here.
How do I pass each input vector through the same transformation before feeding it to the RNN layer?
Is there a way to extend, for example, a GRU so that an f layer is added before going through the actual GRU cell?
I'm not an expert, but I hope this helps.
Question 1:
Vectors x1, x2, ..., xn can have different shapes, but I'm not sure whether different instances of x1 can have different shapes. When I have sequences of different lengths, I usually pad the short ones with 0s, as sketched below.
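For example, Keras ships a padding utility for this (a minimal sketch; the sequences and maxlen are made up):
from keras.preprocessing.sequence import pad_sequences

seqs = [[1, 2, 3], [4, 5], [6]]                         # sequences of different lengths
padded = pad_sequences(seqs, maxlen=4, padding='post')  # pads with 0s, result shape (3, 4)
print(padded)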
Question 2:
I'm not sure about extending a GRU, but I would do something like this:
from keras.layers import Input, Conv1D, LSTM, Dense, concatenate
from keras.models import Model

x_dims = [50, 40, 30, 20, 10]
n = 5

def network():
    shared_f = Conv1D(5, 3, activation='relu')  # shared transformation f
    shared_LSTM = LSTM(10)
    inputs = []
    to_concat = []
    for i in range(n):
        x_i = Input(shape=(x_dims[i], 1), name='x_' + str(i))
        inputs.append(x_i)
        step1 = shared_f(x_i)                   # each input goes through the same f
        to_concat.append(shared_LSTM(step1))
    merged = concatenate(to_concat)
    final = Dense(2, activation='softmax')(merged)
    model = Model(inputs=inputs, outputs=[final])
    model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
    return model

m = network()
In this example, I used a Conv1D as the shared f transformation, but you could use something else (Embedding, etc.).
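To sanity-check the shapes, a quick usage sketch (the dummy batch below is made up):
import numpy as np

dummy = [np.random.rand(8, d, 1) for d in x_dims]  # a batch of 8 for each input branch
print(m.predict(dummy).shape)                      # (8, 2)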

Impact of using relu for gradient descent

What impact does the fact that the ReLU activation function is not differentiable at 0 have on gradient descent?
How to implement the ReLU function in Numpy implements ReLU as the elementwise maximum of 0 and the input.
Does this mean that for gradient descent we do not take the derivative of the ReLU function?
Update:
This text from Neural network backpropagation with RELU aids in understanding:
The ReLU function is defined as: for x > 0 the output is x, i.e. f(x) = max(0, x).
So for the derivative f'(x) it's actually: if x < 0, the output is 0; if x > 0, the output is 1.
The derivative f'(0) is not defined. So it's usually set to 0, or you modify the activation function to be f(x) = max(e, x) for a small e.
Generally: a ReLU is a unit that uses the rectifier activation function. That means it works exactly like any other hidden layer, except that instead of tanh(x), sigmoid(x) or whatever activation you use, you'll use f(x) = max(0, x).
If you have written code for a working multilayer network with sigmoid activation, it's literally 1 line of change. Nothing about forward- or back-propagation changes algorithmically. If you haven't got the simpler model working yet, go back and start with that first. Otherwise your question isn't really about ReLUs but about implementing a NN as a whole.
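As a minimal NumPy sketch of the quoted definition (the function names and the at_zero parameter are illustrative, not part of the quote):
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x, at_zero=0.0):
    # subgradient: 0 for x < 0, 1 for x > 0; the value at exactly x == 0 is a free choice
    g = (x > 0).astype(float)
    g[x == 0] = at_zero
    return g

print(relu_grad(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 1.]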
But this still leaves some confusion: backpropagation of the cost function involves the derivative of the activation function, so for ReLU how does this affect the cost function?
The standard answer is that the input to ReLU is rarely exactly zero, see here for example, so it doesn't make any significant difference.
Specifically, for ReLU to get a zero input, the dot product of one entire row of the input to a layer with one entire column of the layer's weight matrix would have to be exactly zero. Even if you have an all-zero input sample, there should still be a bias term in the last position, so I don't really see this ever happening.
However, if you want to test for yourself, try implementing the derivative at zero as 0, 0.5, and 1 and see if anything changes.
The PyTorch docs give a simple numpy neural network example with one hidden layer and ReLU activation. I have reproduced it below with a fixed random seed and three options for setting the behavior of the ReLU gradient at 0. I have also added a bias term.
import numpy as np

np.random.seed(1)
N, D_in, H, D_out = 4, 2, 30, 1

# Create random input and output data
x = np.random.randn(N, D_in)
x = np.c_[x, np.ones(x.shape[0])]  # append a bias column of ones
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in + 1, H)
w2 = np.random.randn(H, D_out)

learning_rate = 0.002
loss_col = []
for t in range(200):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)  # using ReLU as activation function
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()  # sum-of-squares loss
    loss_col.append(loss)
    print(t, loss, y_pred)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)           # the last layer's error
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)        # the hidden layer's error
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0                          # grad at zero = 1
    # grad_h[h <= 0] = 0                       # grad at zero = 0
    # grad_h[h < 0] = 0; grad_h[h == 0] = 0.5  # grad at zero = 0.5
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
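To compare the three variants, the recorded losses can be plotted (an optional sketch; matplotlib is assumed):
import matplotlib.pyplot as plt

plt.plot(loss_col)  # loss curve for the chosen gradient-at-zero variant
plt.xlabel('iteration')
plt.ylabel('sum-of-squares loss')
plt.show()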

Simple understanding of Orthogonal Distance Regression (ODR)

I have some data points with errors in both the x and y coordinates. I therefore want to use Python's ODR tool to compute the best-fit slope and the error on this slope. I tried it on my actual data but did not get good results, so I first tried ODR with a simple example, as follows:
import numpy as np
import matplotlib.pyplot as plt
from scipy.odr import ODR, Model, RealData

def linear_func(B, x):
    return B[0]*x + B[1]

x_data = np.array([0.0, 1.0, 2.0, 3.0])
y_data = np.array([0.0, 1.0, 2.0, 3.0])
x_err = np.array([1.0, 1.0, 1.0, 1.0])
y_err = np.array([5.0, 5.0, 5.0, 5.0])

linear = Model(linear_func)
data = RealData(x_data, y_data, sx=x_err, sy=y_err)
odr = ODR(data, linear, beta0=[1.0, 0.0])
out = odr.run()
out.pprint()
The pprint() line gives:
Beta: [ 1. 0.]
Beta Std Error: [ 0. 0.]
Beta Covariance: [[ 5.20000039 -7.80000026]
[ -7.80000026 18.1999991 ]]
Residual Variance: 0.0
Inverse Condition #: 0.0315397386692
Reason(s) for Halting:
Sum of squares convergence
The resulting Beta values are shown to be 1.0 and 0.0, which I would expect. But why are the standard errors, Beta Std Error, also both zero if my errors on the data points are quite large? Can anyone offer some insight?
I see no discrepancy here. Your example model fits your data perfectly, so the weights you pass to the data do not matter. Moreover, your initial guess beta0=[1.0, 0.0] is already an optimal parameter vector, so the ODR machinery cannot find an iterative improvement of the parameters and quits after zero iterations. The associated errors are zero because the sum of squares at B=[1, 0] is exactly zero, so no other solution can do better for the given data.
To see what actually happens inside the ODR.run() function, add odr.set_iprint(init=2, iter=2, final=2) before you run the regression. In particular, the following output confirms that ODR reaches the stopping condition immediately:
--- STOPPING CONDITIONS:
INFO = 1 ==> SUM OF SQUARES CONVERGENCE.
NITER = 0 (NUMBER OF ITERATIONS)
Note that the errors will no longer be zero, and NITER will be a positive integer, if either your x_data is unequal to y_data or your beta0 does not match the optimal solution. In that case, the errors returned by ODR will be nonzero, although still incredibly small.
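For example, a quick sketch reusing the setup above but with the y values perturbed off the line, so the fit is no longer exact:
y_data_perturbed = np.array([0.1, 0.9, 2.2, 2.8])
data = RealData(x_data, y_data_perturbed, sx=x_err, sy=y_err)
odr = ODR(data, linear, beta0=[1.0, 0.0])
odr.set_iprint(init=2, iter=2, final=2)
out = odr.run()
print(out.beta)     # close to [1, 0] but not exact
print(out.sd_beta)  # standard errors are now nonzero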

Selectively zero weights in TensorFlow?

Let's say I have an NxM weight variable weights and a constant NxM matrix of 1s and 0s, mask.
If a layer of my network is defined like this (with other layers similarly defined):
masked_weights = mask * weights
layer1 = tf.nn.relu(tf.matmul(layer0, masked_weights) + biases1)
Will this network behave as if the corresponding 0s in mask are zeros in weights during training? (i.e. as if the connections represented by those weights had been removed from the network entirely)?
If not, how can I achieve this goal in TensorFlow?
The answer is yes, as the following experiment shows.
The implementation is:
import numpy as np, scipy as sp, tensorflow as tf
x = tf.placeholder(tf.float32, shape=(None, 3))
weights = tf.get_variable("weights", [3, 2])
bias = tf.get_variable("bias", [2])
mask = tf.constant(np.asarray([[0, 1], [1, 0], [0, 1]], dtype=np.float32)) # constant mask
masked_weights = tf.multiply(weights, mask)
y = tf.nn.relu(tf.nn.bias_add(tf.matmul(x, masked_weights), bias))
loss = tf.losses.mean_squared_error(tf.constant(np.asarray([[1, 1]], dtype=np.float32)),y)
weights_grad = tf.gradients(loss, weights)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print("Masked weights=\n", sess.run(masked_weights))
data = np.random.rand(1, 3)
print("Graident of weights\n=", sess.run(weights_grad, feed_dict={x: data}))
sess.close()
After running the code above, you will see the gradients are masked as well. In my example, they are:
Gradient of weights
= [array([[ 0. , -0.40866762],
[ 0.34265977, -0. ],
[ 0. , -0.35294518]], dtype=float32)]
The answer is yes, and the reason lies in backpropagation, as explained below.
Since mask_w = mask * w with a constant mask, del(mask_w) = mask * del(w), and by the chain rule the gradient of the loss with respect to w is the mask times the gradient with respect to mask_w.
The mask will make the gradient 0 wherever its value is zero; wherever its value is 1, the gradient will flow as before. This is a common trick used in seq2seq prediction to mask the different-size outputs in the decoding layer. You can read more about this here.
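A tiny NumPy sketch of that argument (the arrays are made up; grad_mask_w stands in for the upstream gradient dL/d(mask * w)):
import numpy as np

mask = np.array([[0., 1.], [1., 0.], [0., 1.]])
grad_mask_w = np.random.randn(3, 2)  # stand-in for dL/d(mask * w)
grad_w = mask * grad_mask_w          # chain rule: dL/dw = mask * dL/d(mask * w)
print(grad_w)                        # zeros exactly where the mask is zero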

Multi-class regression in nolearn?

I'm trying to build a Neural Network using nolearn that can do regression on multiple classes.
For example:
net = NeuralNet(layers=layers_s,
                input_shape=(None, 2048),
                l1_num_units=8000,
                l2_num_units=4000,
                l3_num_units=2000,
                l4_num_units=1000,
                d1_p=0.25,
                d2_p=0.25,
                d3_p=0.25,
                d4_p=0.1,
                output_num_units=noutput,
                output_nonlinearity=None,
                regression=True,
                objective_loss_function=lasagne.objectives.squared_error,
                update_learning_rate=theano.shared(float32(0.1)),
                update_momentum=theano.shared(float32(0.8)),
                on_epoch_finished=[
                    AdjustVariable('update_learning_rate', start=0.1, stop=0.001),
                    AdjustVariable('update_momentum', start=0.8, stop=0.999),
                    EarlyStopping(patience=200),
                ],
                verbose=1,
                max_epochs=1000)
noutput is the number of classes for which I want to do regression; if I set this to 1, everything works. When I use 26 (the number of classes here) as output_num_units, I get a Theano dimension error (dimension mismatch in args to gemm (128,1000)x(1000,26)->(128,1)).
The Y labels are continuous variables, each corresponding to a class. I tried to reshape the Y labels to (rows, classes), but this means I have to give a lot of the Y labels a value of 0 (because the value for that class is unknown). Is there any way to do this without setting some y_labels to 0?
If you want to do multiclass (or multilabel) regression with 26 classes, your output must not have shape (1082,) but (1082, 26). To preprocess your output, you can use sklearn.preprocessing.label_binarize, which will transform your 1D output into a 2D output.
Also, your output nonlinearity should be a softmax function, so that the rows of your output sum to 1.
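A minimal sketch of that preprocessing step (the labels and class count below are illustrative):
from sklearn.preprocessing import label_binarize

y = [0, 3, 25, 1]                               # 1D class labels
Y = label_binarize(y, classes=list(range(26)))  # 2D output, shape (4, 26)
print(Y.shape)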