How to initialise fixed weights - neural-network

I would like to fix the initial weights for the neural network I created.
Currently, I have initialized the weights as given below.
Is there a way to initialize one fixed set of random weights, so that every time I run the code the initialized arrays are the same?
import numpy as np

def InitializeWeights(nodes):
    layers, weights = len(nodes), []
    for i in range(1, layers):
        # one (nodes[i] x (nodes[i-1] + 1)) matrix per layer; the extra column is for the bias
        w = [[np.random.uniform(-1, 1)  # randomise weights
              for k in range(nodes[i-1] + 1)]
             for j in range(nodes[i])]
        weights.append(np.matrix(w))
    return weights

You should set the seed of TensorFlow's random number generator to a fixed, arbitrary value at the beginning of your experiment. This way, running the initialization will produce the same results every time:
tf.set_random_seed(42)
# Initialize weights the standard way! (just define tf.keras layers or similar)
Optionally (if you're defining layers at a lower level), you can also set an individual seed for each weight-generating op:
W = tf.Variable(tf.truncated_normal((10, 10), stddev=0.1, seed=42))
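Note that the InitializeWeights function in the question uses NumPy rather than TensorFlow; if you keep that code, seeding NumPy's generator gives the same reproducibility. A minimal sketch (the seed value 42 and the layer sizes are arbitrary choices for illustration):
import numpy as np

np.random.seed(42)                        # fix NumPy's global random state
weights = InitializeWeights([4, 8, 3])    # same matrices on every run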

Related

Keras: make specific weights in a dense layer untrainable [duplicate]

I am using Keras and TensorFlow 1.4.
I want to explicitly specify which neurons are connected between two layers. To that end I have a matrix A that contains a one wherever neuron i in the first layer is connected to neuron j in the second layer, and zeros elsewhere.
My first attempt was to create a custom layer with a kernel of the same size as A, with non-trainable zeros where A has zeros and trainable weights where A has ones. The desired output would then be a simple dot product. Unfortunately, I did not manage to figure out how to implement a kernel that is partly trainable and partly non-trainable.
Any suggestions?
(Building a functional model with many neurons connected by hand would be a workaround, but a rather 'ugly' one.)
The simplest way I can think of, if you have this matrix correctly shaped, is to derive from the Dense layer and simply multiply the original weights by the matrix in the code:
from keras.layers import Dense

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):
        # this is matrix A
        self.connections = connections
        # initialize the original Dense with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):
        # mask the kernel before calling the original call:
        self.kernel = self.kernel * self.connections
        # run the original calculations:
        return super(CustomConnected, self).call(inputs)
Using:
model.add(CustomConnected(units, matrixA))
model.add(CustomConnected(hidden_dim2, matrixB, activation='tanh'))  # can use all the other named parameters...
Notice that all the neurons/units still have a bias added at the end. The argument use_bias=False will still work if you don't want biases. You can also do exactly the same thing with a vector B, for instance, and mask the original biases with self.bias = self.bias * vectorB.
Hint for testing: use different input and output dimensions, so you can be sure that your matrix A has the correct shape.
I just realized that my code is potentially buggy, because I'm changing a property that is used by the original Dense layer. If weird behaviors or messages appear, you can try another call method:
def call(self, inputs):
    output = K.dot(inputs, self.kernel * self.connections)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output
Where K comes from import keras.backend as K.
You may also go further and set a custom get_weights() method if you want to see the weights masked with your matrix. (This would not be necessary in the first approach above)
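For completeness, here is a minimal usage sketch under the same assumptions; the layer sizes (3 inputs, 5 units) and the random 0/1 matrix matrixA are made up for illustration, and only the connections marked with 1 carry effective weights:
import numpy as np
from keras.models import Sequential

# hypothetical connection matrix, shaped (input_dim, units) like the Dense kernel
matrixA = np.random.randint(0, 2, size=(3, 5)).astype('float32')

model = Sequential()
model.add(CustomConnected(5, matrixA, input_shape=(3,), activation='tanh'))
model.compile(optimizer='adam', loss='mse')
# positions where matrixA is 0 contribute nothing to the layer's output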

Tensorflow max-margin loss training?

I want to train a neural network in TensorFlow with a max-margin loss function, using one negative sample per positive sample:
max(0, 1 - pos_score + neg_score)
What I'm currently doing is this:
The network takes three inputs: input1, and then one positive example input2_pos and one negative example input2_neg. (These are indices to a word embeddings layer.) The network is supposed to calculate a score that expresses how related two examples are.
Here's a simplified version of my code:
input1 = tf.placeholder(dtype=tf.int32, shape=[batch_size])
input2_pos = tf.placeholder(dtype=tf.int32, shape=[batch_size])
input2_neg = tf.placeholder(dtype=tf.int32, shape=[batch_size])
# f is a neural network outputting a score
pos_score = f(input1,input2_pos)
neg_score = f(input1,input2_neg)
cost = tf.maximum(0., 1. - pos_score + neg_score)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
What I see when I run this is that the network just learns which input holds the positive example - it always predicts scores along the lines of:
pos_score = 0.9965983
neg_score = 0.00341663
How can I structure the variables/training so that the network learns the task instead?
I want just one network that takes two inputs and calculates a score expressing the correlation between them, and train it with max-margin loss.
Calculating the scores for the positive and negative examples separately does not seem like an option to me, since then it won't backpropagate properly. Another option seems to be randomizing the inputs - but then the loss function needs to know which example is the positive one, and passing that in as another parameter would give away the solution again.
Any ideas?
Given your results (roughly 1 for every positive, 0 for every negative), it seems you have two different networks learning:
one to predict 1 for the first input,
one to predict 0 for the second input.
When using max-margin loss, you need to use the same network for computing both pos_score and neg_score. The way to do that is to share the variables. I will give you a small example using tf.get_variable():
with tf.variable_scope("network"):
    w = tf.get_variable("weights", shape=..., initializer=...)

def f(x, y):
    with tf.variable_scope("network", reuse=True):
        w = tf.get_variable("weights")
        res = w * (x - y)  # some computation
    return res
With this function f as model, the training will optimize the shared variable with name "network/weights".
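Tying this back to the question's setup, a minimal sketch could look as follows (the averaging over the batch is an added choice; the key point is only that both scores flow through the same shared "network/weights" variable, so the margin loss updates one model):
pos_score = f(input1, input2_pos)   # both calls reuse "network/weights"
neg_score = f(input1, input2_neg)
cost = tf.reduce_mean(tf.maximum(0., 1. - pos_score + neg_score))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)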

Using SparseTensor as a trainable variable?

I'm trying to use SparseTensor to represent weight variables in a fully-connected layer.
However, it seems that TensorFlow 0.8 doesn't allow a SparseTensor to be used as a tf.Variable.
Is there any way to go around this?
I've tried
import tensorflow as tf
a = tf.constant(1)
b = tf.SparseTensor([[0,0]],[1],[1,1])
print a.__class__ # shows <class 'tensorflow.python.framework.ops.Tensor'>
print b.__class__ # shows <class 'tensorflow.python.framework.ops.SparseTensor'>
tf.Variable(a) # Variable is declared correctly
tf.Variable(b) # Fail
By the way, my ultimate goal of using SparseTensor is to permanently mask some of the connections in dense form. Thus, these pruned connections should be ignored while calculating and applying gradients.
In my current MLP implementation, SparseTensor and its sparse matmul ops successfully produce inference outputs. However, the weights declared using SparseTensor aren't updated as training proceeds.
As a workaround to your problem, you can provide a tf.Variable (until TensorFlow v0.8) for the values of a sparse tensor. The sparsity structure has to be pre-defined in that case; the weights, however, remain trainable.
weights = tf.Variable(<initial-value>)
sparse_var = tf.SparseTensor(<indices>, weights, <shape>) # v0.8
sparse_var = tf.SparseTensor(<indices>, tf.identity(weights), <shape>) # v0.9
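As a rough illustration of how this workaround can be wired into a layer (the indices, shape, and placeholder here are made up, and tf.sparse_tensor_dense_matmul is assumed to be available in your TensorFlow version):
import tensorflow as tf

# hypothetical 4x3 weight matrix with only two non-zero connections
indices = [[0, 1], [2, 2]]
weights = tf.Variable([0.5, -0.3])                      # trainable values
sparse_w = tf.SparseTensor(indices, tf.identity(weights), [4, 3])

x = tf.placeholder(tf.float32, shape=[3, None])         # examples as columns
h = tf.sparse_tensor_dense_matmul(sparse_w, x)          # only the two stored weights are trained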
TensorFlow doesn't currently support sparse tensor variables. However, it does support sparse lookups (tf.embedding_lookup) and sparse gradient updates (tf.sparse_add) of dense variables. I suspect these two will suffice for your use case.
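A small sketch of that idea (the variable name, sizes, and loss are illustrative; tf.nn.embedding_lookup is assumed here, and the gradients for the looked-up rows arrive as sparse updates to the dense variable):
import tensorflow as tf

dense_w = tf.Variable(tf.random_uniform([1000, 64]))     # dense variable
ids = tf.placeholder(tf.int32, shape=[None])             # sparse set of row indices
rows = tf.nn.embedding_lookup(dense_w, ids)              # only these rows participate
loss = tf.reduce_sum(tf.square(rows))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # updates only the looked-up rows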
TensorFlow doesn't support training on sparse tensors yet. You can initialize a sparse tensor however you wish, then convert it into a dense tensor and create a variable from it like this:
# You need to correctly initialize the sparse tensor with indices, values and a shape
b = tf.SparseTensor(indices, values, shape)
b_dense = tf.sparse_tensor_to_dense(b)
b_variable = tf.Variable(b_dense)
Now you have initialized a sparse tensor as a variable. Next you need to take care of the gradient updates (in other words, make sure the entries of the variable that should be zero stay zero, since backpropagation would otherwise compute non-vanishing gradients for them when used naively).
In order to do this, TensorFlow optimizers have a method called tf.train.Optimizer.compute_gradients(loss, [list_of_variables]). This calculates all the gradients in the graph necessary to minimize the loss function, but doesn't apply them yet. The method returns a list of tuples of the form (gradient, variable). You can modify these gradients freely, and in your case it makes sense to mask the unneeded gradients to 0 (e.g. by creating a mask tensor that is 1.0 where your network actually has weights and 0.0 elsewhere).
After having modified them, you call the optimizer method tf.train.Optimizer.apply_gradients(grads_and_vars) to actually apply the gradients. An example code would look like this:
# Create optimizer instance
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
# Get the gradients for your weights
grads_and_vars = optimizer.compute_gradients(loss, [b_variable])
# Modify the gradients at will
# In your case it would look similar to this
modified_grads_and_vars = [(tf.multiply(gv[0], mask_tensor), gv[1]) for gv in grads_and_vars]
# Apply modified gradients to your model
optimizer.apply_gradients(modified_grads_and_vars)
This makes sure your entries stay 0 in your weight matrix and no unwanted connections are created. You need to take care of all the other gradients for all other variables later.
The above code works with a minor correction, like this:
def optimize(loss, mask_tensor):
    # mask_tensor maps each variable to its 0/1 mask
    optimizer = tf.train.AdamOptimizer(0.001)
    grads_and_vars = optimizer.compute_gradients(loss)
    modified_grads_and_vars = [
        (tf.multiply(gv[0], mask_tensor[gv[1]]), gv[1]) for gv in grads_and_vars
    ]
    return optimizer.apply_gradients(modified_grads_and_vars)
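The corrected code assumes mask_tensor is a dictionary keyed by variable. A rough sketch of how it could be built, reusing indices, values, shape, and b_variable from the earlier snippet (the all-ones masks for the remaining variables are an added assumption so their gradients pass through unchanged):
# 1.0 where the sparse structure has entries, 0.0 elsewhere
b_mask = tf.sparse_tensor_to_dense(
    tf.SparseTensor(indices, tf.ones_like(values), shape))

mask_tensor = {v: tf.ones_like(v) for v in tf.trainable_variables()}
mask_tensor[b_variable] = b_mask

train_op = optimize(loss, mask_tensor)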

How to fix the test and train set in svm using matlab?

I am new to MATLAB. I want to know how to fix the train and test sets in SVM code. I found some code, but it randomly selects the test and train sets. My database is the YMU database. How should I fix the train and test sets in the SVM code? I use crossvalind to randomly select the train and test sets; which variable should I change in the crossvalind call?
%load YMU database
%NMC is non-makeup , MC is makeup
%testingset = non-makeup, trainingset is makeup
load TestingSetNMC.mat
load TrainingSetMC.mat
load gnd_Test.mat
load gnd_Train.mat
data1 = TrainingSet;
data2 = TestingSet;
groups1 = ismember(gnd_Train,'data1');
groups2 = ismember(gnd_Test,'data2');
%crossvalind is random choose
[train] = crossvalind('holdOut',groups1);
[test] = crossvalind('holdOut',groups2);
cp = classperf(groups1);
svmStruct = svmtrain(data1(train,:),groups1(train),'showplot',true);
classes = svmclassify(svmStruct,data2(test,:),'showplot',true);
classperf(cp,classes,test);
cp.CorrectRate
With (most) MATLAB functions that generate pseudo-random output, you can control that output by explicitly specifying the random number generator's seed and method.
In your case, place the following line anywhere before you call crossvalind:
rng(1, 'twister');
This sets the seed to 1 and the method to Mersenne Twister. In the documentation for rng you will find a more detailed explanation about controlling pseudo-random output.

TensorFlow Training

Assume I have a very simple neural network, like a multilayer perceptron. For each layer the activation function is sigmoid and the network is fully connected.
In TensorFlow this might be defined like this:
sess = tf.InteractiveSession()
# Training Tensor
x = tf.placeholder(tf.float32, shape = [None, n_fft])
# Label Tensor
y_ = tf.placeholder(tf.float32, shape = [None, n_fft])
# Declaring variable buffer for weights W and bias b
# Layer structure [n_fft, n_fft, n_fft, n_fft]
# Input -> Layer 1
struct_w = [n_fft, n_fft]
struct_b = [n_fft]
W1 = weight_variable(struct_w, 'W1')
b1 = bias_variable(struct_b, 'b1')
h1 = tf.nn.sigmoid(tf.matmul(x, W1) + b1)
# Layer1 -> Layer 2
W2 = weight_variable(struct_w, 'W2')
b2 = bias_variable(struct_b, 'b2')
h2 = tf.nn.sigmoid(tf.matmul(h1, W2) + b2)
# Layer2 -> output
W3 = weight_variable(struct_w, 'W3')
b3 = bias_variable(struct_b, 'b3')
y = tf.nn.sigmoid(tf.matmul(h2, W3) + b3)
# Calculating difference between label and output using mean square error
mse = tf.reduce_mean(tf.square(y - y_))
# Train the Model
# Gradient Descent
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse)
The design target for this model is to map an n_fft-point FFT spectrogram to another n_fft-point target spectrogram. Let's assume both the training data and target data are of size [3000, n_fft]. They are stored in the variables spec_train and spec_target.
Now here comes the question: for TensorFlow, is there any difference between these two training schemes?
Training 1:
for i in xrange(200):
    train_step.run(feed_dict={x: spec_train, y_: spec_target})
Training 2:
for i in xrange(200):
    for j in xrange(3000):
        train = spec_train[j, :].reshape(1, n_fft)
        label = spec_target[j, :].reshape(1, n_fft)
        train_step.run(feed_dict={x: train, y_: label})
Thank you very much!
In the first training version, you are training the entire batch of training data at once, which means that the first and the 3000th element of spec_train will be processed using the same model parameters in a single step. This is known as (Batch) Gradient Descent.
In the second training version, you are training a single example from the training data at once, which means that the 3000th element of spec_train will be processed using model parameters that have been updated 2999 times since the first element was most recently processed. This is known as Stochastic Gradient Descent (or it would be if the element was selected at random).
In general, TensorFlow is used with datasets that are too large to process in one batch, so mini-batch SGD (where a subset of the examples are processed in one step) is favored. Processing a single element at a time is theoretically desirable, but is inherently sequential and has high fixed costs because the matrix multiplications and other operations are not as computationally dense. Therefore, processing a small batch (e.g. 32 or 128) of examples at once is the usual approach, with multiple replicas training on different batches in parallel.
See this Stats StackExchange question for a more theoretical discussion of when you should use one approach versus the other.
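For reference, a minimal mini-batch variant of the question's training loop could look like this (the batch size of 100 is arbitrary, and the batches are taken in order rather than shuffled, which a real training loop usually would do):
batch_size = 100
for i in xrange(200):
    for start in xrange(0, 3000, batch_size):
        batch_x = spec_train[start:start + batch_size, :]
        batch_y = spec_target[start:start + batch_size, :]
        train_step.run(feed_dict={x: batch_x, y_: batch_y})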
Yes, there's a difference. With the second way, the loss can be a bit noisy; it's more like online training, where you update all of your parameters for each data point. The first way is called batch gradient descent: you take the whole batch at once, compute the average loss, and then update the parameters.
Please refer to this link:
https://stats.stackexchange.com/questions/49528/batch-gradient-descent-versus-stochastic-gradient-descent
The first answer there is really good.