Spatial reflection padding in Caffe

Any ideas how to implement Spatial Reflection Padding in Caffe like in Torch?
(x): nn.SpatialReflectionPadding(l=1, r=1, t=1, b=1)
(x): nn.SpatialConvolution(64 -> 64, 3x3)
(x): nn.ReLU

One way to do this would be using the Python Layer of Caffe. You can then set the functions yourself and customize based on your needs. However, this layer can only run on the CPU, so it might slow down your model, especially if you use it in the middle of the network.
In the following, I have defined a layer to zero pad input using the Python layer, which you can modify to suit your needs:
import caffe
import numpy as np

class SpatialReflectionPadding(caffe.Layer):

    def setup(self, bottom, top):
        if len(bottom) != 1: # check that a single bottom blob is given
            raise Exception("Expected a single blob")
        if len(bottom[0].shape) != 4: # check that it is 4D
            raise Exception("Expected 4D blob")
        params = eval(self.param_str) # get the params given in the prototxt
        self.l = params["l"]
        self.r = params["r"]
        self.t = params["t"]
        self.b = params["b"]

    def reshape(self, bottom, top):
        # set the shape of the top blob based on the shape of the existing bottom blob
        top[0].reshape(bottom[0].shape[0], bottom[0].shape[1],
                       bottom[0].shape[2] + self.t + self.b,
                       bottom[0].shape[3] + self.r + self.l)

    def forward(self, bottom, top):
        for i in range(0, top[0].shape[2]):
            for j in range(0, top[0].shape[3]):
                if (i < self.t or i >= self.t + bottom[0].shape[2]) or (j < self.l or j >= self.l + bottom[0].shape[3]):
                    top[0].data[:, :, i, j] = 0 # for the padded part, set the value to 0
                else:
                    top[0].data[:, :, i, j] = bottom[0].data[:, :, i - self.t, j - self.l] # for the rest, copy the value from the bottom blob

    def backward(self, top, propagate_down, bottom):
        # gradient for the backward pass: crop the top gradient back to the bottom shape
        bottom[0].diff[...] = top[0].diff[:, :, self.t:self.t + bottom[0].shape[2], self.l:self.l + bottom[0].shape[3]]
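If you want actual reflection (rather than zero) padding, the forward pass above could be replaced with something along these lines; this is only a sketch relying on numpy's built-in 'reflect' mode, and a fully correct layer would also need a backward pass that adds the gradients coming from the reflected border back onto the corresponding interior pixels:
    def forward(self, bottom, top):
        # pad only the two spatial dimensions, mirroring the border values
        top[0].data[...] = np.pad(bottom[0].data,
                                  ((0, 0), (0, 0), (self.t, self.b), (self.l, self.r)),
                                  mode='reflect')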
Then, in your prototxt file, you can use it as:
layer {
  name: "srp"  # some name
  type: "Python"
  bottom: "some_layer"  # the layer which provides the input blob
  top: "srp"
  python_param {
    module: "caffe_srp"  # whatever is your module name
    layer: "SpatialReflectionPadding"
    param_str: '{ "l": 1, "b": 1, "t": 1, "r": 1 }'
  }
}
I am not 100% sure that it works correctly, though when I used it, it appeared to do so. In any case, it should give an idea and a starting point on how one could proceed. Also, you could refer to this question and its answers.

Related

Trying to use Distributed data parallel on GANs but getting runtime error about an inplace operation

I am trying to train a GAN on a machine with 3 GPUs using distributed data parallel.
Before wrapping my model in DDP everything works fine, but when I wrap it, it gives me the following runtime error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128]] is at version 5; expected version 4 instead.
I cloned every tensor related to the gradient to resolve the inplace operation (if there is one), but I could not find it.
The part of the code with the problem is as follows:
Tensor = torch.cuda.FloatTensor

# ----------
#  Training
# ----------
def train_gan(rank, world_size, opt):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)
    if rank == 0:
        get_dataloader(rank, opt)
    dist.barrier()
    print(f"Rank {rank}/{world_size} training process passed data download barrier.\n")
    dataloader = get_dataloader(rank, opt)

    # Loss function
    adversarial_loss = torch.nn.BCELoss()

    # Initialize generator and discriminator
    generator = Generator()
    discriminator = Discriminator()

    # Initialize weights
    generator.apply(weights_init_normal)
    discriminator.apply(weights_init_normal)

    generator.to(rank)
    discriminator.to(rank)
    generator_d = DDP(generator, device_ids=[rank])
    discriminator_d = DDP(discriminator, device_ids=[rank])

    # Optimizers
    # Since we are computing the average of several batches at once (an effective batch size of
    # world_size * batch_size) we scale the learning rate to match.
    optimizer_G = torch.optim.Adam(generator_d.parameters(), lr=opt.lr * opt.world_size, betas=(opt.b1, opt.b2))
    optimizer_D = torch.optim.Adam(discriminator_d.parameters(), lr=opt.lr * opt.world_size, betas=(opt.b1, opt.b2))

    losses = []
    for epoch in range(opt.n_epochs):
        for i, (imgs, _) in enumerate(dataloader):

            # Adversarial ground truths
            valid = Variable(Tensor(imgs.shape[0], 1).fill_(1.0), requires_grad=False).to(rank)
            fake = Variable(Tensor(imgs.shape[0], 1).fill_(0.0), requires_grad=False).to(rank)

            # Configure input
            real_imgs = Variable(imgs.type(Tensor)).to(rank)

            # -----------------
            #  Train Generator
            # -----------------
            optimizer_G.zero_grad()

            # Sample noise as generator input
            z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], opt.latent_dim)))).to(rank)

            # Generate a batch of images
            gen_imgs = generator_d(z)

            # Loss measures generator's ability to fool the discriminator
            g_loss = adversarial_loss(discriminator_d(gen_imgs), valid)

            g_loss.backward()
            optimizer_G.step()

            # ---------------------
            #  Train Discriminator
            # ---------------------
            optimizer_D.zero_grad()

            # Measure discriminator's ability to classify real from generated samples
            real_loss = adversarial_loss(discriminator_d(real_imgs), valid)
            fake_loss = adversarial_loss(discriminator_d(gen_imgs.detach()), fake)
            d_loss = ((real_loss + fake_loss) / 2).to(rank)

            d_loss.backward()
            optimizer_D.step()
I encountered a similar error when trying to train a GAN with DistributedDataParallel.
I noticed the problem was coming from BatchNorm layers in my discriminator.
Indeed, DistributedDataParallel synchronizes the batchnorm parameters at each forward pass (see the doc), thereby modifying the variable inplace, which causes problems if you have multiple forward passes in a row.
Converting my BatchNorm layers to SyncBatchNorm did the trick for me:
discriminator = torch.nn.SyncBatchNorm.convert_sync_batchnorm(discriminator)
discriminator = DDP(discriminator)
You probably want to do it anyway when using DistributedDataParallel.
Alternatively, if you don't want to use SyncBatchNorm, you can set the broadcast_buffers parameter to False, but I don't think you really want to do that, as it means your batch norm stats will not be synchronized among processes.
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)

Pytorch minibatching keeps model from training

I am trying to classify sequences by a binary feature. I have a dataset of sequence/label pairs and am using a simple one-layer LSTM to classify each sequence. Before I implemented minibatching, I was getting reasonable accuracy on a test set (80%), and the training loss would go from 0.6 to 0.3 (averaged).
I implemented minibatching, using parts of this tutorial: https://pytorch.org/tutorials/beginner/chatbot_tutorial.html
However, now my model won’t do better than 70-72% (70% of the data has one label) with batch size set to 1 and all other parameters exactly the same. Additionally, the loss starts out at 0.0106 and quickly gets really really small, with no significant change in results. I feel like the results between no batching and batching with size 1 should be the same, so I probably have a bug, but for the life of me I can’t find it. My code is below.
Training code (one epoch):
for i in t:
    model.zero_grad()

    # prep inputs
    last = i + self.params['batch_size']
    last = last if last < len(train_data) else len(train_data)
    batch_in, lengths, batch_targets = self.batch2TrainData(train_data[shuffled][i:last], word_to_ix, label_to_ix)

    iters += 1

    # forward pass.
    tag_scores = model(batch_in, lengths)

    # compute loss, then do backward pass, then update gradients
    loss = loss_function(tag_scores, batch_targets)
    loss.backward()

    # Clip gradients: gradients are modified in place
    nn.utils.clip_grad_norm_(model.parameters(), 50.0)

    optimizer.step()
Functions:
def prep_sequence(self, seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

# transposes batch_in
def zeroPadding(self, l, fillvalue=0):
    return list(itertools.zip_longest(*l, fillvalue=fillvalue))

# Returns padded input sequence tensor and lengths
def inputVar(self, batch_in, word_to_ix):
    idx_batch = [self.prep_sequence(seq, word_to_ix) for seq in batch_in]
    lengths = torch.tensor([len(idxs) for idxs in idx_batch])
    padList = self.zeroPadding(idx_batch)
    padVar = torch.LongTensor(padList)
    return padVar, lengths

# Returns all items for a given batch of pairs
def batch2TrainData(self, batch, word_to_ix, label_to_ix):
    # sort by decreasing length
    batch = batch[np.argsort([len(x['turn']) for x in batch])[::-1]]
    input_batch, output_batch = [], []
    for pair in batch:
        input_batch.append(pair['turn'])
        output_batch.append(pair['label'])
    inp, lengths = self.inputVar(input_batch, word_to_ix)
    output = self.prep_sequence(output_batch, label_to_ix)
    return inp, lengths, output
Model:
class LSTMClassifier(nn.Module):

    def __init__(self, params, vocab_size, tagset_size, weights_matrix=None):
        super(LSTMClassifier, self).__init__()
        self.hidden_dim = params['hidden_dim']
        if weights_matrix is not None:
            self.word_embeddings = nn.Embedding.from_pretrained(weights_matrix)
        else:
            self.word_embeddings = nn.Embedding(vocab_size, params['embedding_dim'])
        self.lstm = nn.LSTM(params['embedding_dim'], self.hidden_dim, bidirectional=False)
        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(self.hidden_dim, tagset_size)

    def forward(self, batch_in, lengths):
        embeds = self.word_embeddings(batch_in)
        packed = nn.utils.rnn.pack_padded_sequence(embeds, lengths)
        lstm_out, _ = self.lstm(packed)
        outputs, _ = nn.utils.rnn.pad_packed_sequence(lstm_out)
        tag_space = self.hidden2tag(outputs)
        tag_scores = F.log_softmax(tag_space, dim=0)
        return tag_scores[-1]
For anyone else with a similar issue, I got it to work. I removed the log_softmax calculation, so this:
tag_space = self.hidden2tag(outputs)
tag_scores = F.log_softmax(tag_space, dim=0)
return tag_scores[-1]
becomes this:
tag_space = self.hidden2tag(outputs)
return tag_space[-1]
I also changed NLLLoss to CrossEntropyLoss (not shown above) and initialized CrossEntropyLoss with no parameters (i.e., no ignore_index).
I am not certain why these changes were necessary (the docs even say that NLLLoss should be run after a log_softmax layer), but they got my model working and brought my loss back to a reasonable range (~0.5).
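One plausible explanation: tag_space has shape (seq_len, batch, tagset_size), so F.log_softmax(tag_space, dim=0) normalizes over the sequence dimension rather than over the tags, whereas CrossEntropyLoss applies log-softmax over the class dimension internally. A quick sanity check of that equivalence (a minimal sketch with made-up sizes):
import torch
import torch.nn.functional as F

# hypothetical sizes: 5 time steps, batch of 4, 2 tags
tag_space = torch.randn(5, 4, 2)
targets = torch.randint(0, 2, (4,))

last = tag_space[-1]                                    # (batch, num_tags), as returned by the fixed model
ce = F.cross_entropy(last, targets)                     # what CrossEntropyLoss computes
nll = F.nll_loss(F.log_softmax(last, dim=1), targets)   # log_softmax over the tag dimension + NLLLoss
print(torch.allclose(ce, nll))                          # True: the two formulations agree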

caffe: convolution with a fixed predefined kernel (filter)

Instead of having a learnable filter, I am interested in a convolution with a fixed predefined matrix; for example, a Sobel filter:
So I set the learning rate multiplier to 0 (so it's fixed) and my kernel size to 3:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }
  convolution_param {
    num_output: 10
    kernel_size: 3  # filter is 3x3
    stride: 2
    weight_filler {
      type: ??
    }
  }
}
Now, I do not know how to give matrix information to the conv layer. Any ideas? I think it should go to weight_filler, but how?
One more question: does num_output have to be the same as the bottom's channel size (the data has 10 channels here)? Can I set num_output to another number? If yes, what will happen and what does that mean?
How to init weights to specific values?
You can use net_surgery to load your untrained/un-initialized net in python and then assign the specific weights you want to the filters, save the net, and use it with the weights you want for this specific layer.
How to set num_output and other conv_params?
This is a good question: you have an input blob of shape bx10xhxw and you want to apply a 3x3 filter to each channel to get back a new filtered bx10xhxw blob. If you just set num_output: 10, the shape of the filters would be 10x10x3x3, that is, 10 filters of shape 10x3x3 - which is not what you expect. You want a 3x3 filter.
To that end you need to look at the group parameter of conv_param. Setting group: 10 together with num_output: 10 (assuming the input has c=10 channels) will give you what you want: the weight shape will be 10x1x3x3.
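Putting the two parts together, here is a hedged net_surgery-style sketch (the prototxt and output file names are assumptions) that loads the net and overwrites the conv1 weights with a fixed Sobel kernel:
import caffe
import numpy as np

net = caffe.Net('deploy.prototxt', caffe.TEST)  # hypothetical prototxt name

sobel = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=np.float32)

# With group: 10 and num_output: 10 the weight blob of "conv1" has shape 10x1x3x3,
# so broadcasting a (1, 1, 3, 3) array copies the same kernel into every channel.
net.params['conv1'][0].data[...] = sobel[np.newaxis, np.newaxis, :, :]
net.save('net_with_sobel.caffemodel')
Since the layer's param block sets lr_mult: 0 and decay_mult: 0, the solver will then leave these weights untouched during training.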
In the Python caffe interface, a caffe.Net object is instantiated by loading a .prototxt file, which defines the network architecture. You can use the following properties of the caffe.Net object to access various information about the network.
blob_loss_weights: An OrderedDict (bottom to top, i.e., input to output) of network blob loss weights indexed by layer name
blobs: An OrderedDict (bottom to top, i.e., input to output) of network blobs indexed by layer name
bottom_names: all bottom names in the network
inputs: inputs to this network
layer_dict: An OrderedDict (bottom to top, i.e., input to output) of network layers indexed by layer name
layers: a caffe._caffe.LayerVec, a list whose elements are the caffe.Layer objects in the network; the caffe.Layer class has a blobs field for the layer's parameter memory and a type field for the layer type (e.g., Convolution, Data, etc.)
outputs: outputs from this network
params: An OrderedDict (bottom to top, i.e., input to output) of network parameters indexed by name; each is a list of multiple blobs (e.g., weights and biases)
top_names: all top names in the network
You can use caffe.Net.params for accessing layer's learnable parameters together with caffe.Net.layer_dict to access layer info.
caffe.Net.params is an ordered dictionary whose keys are layer names and whose values are the blobs holding the parameters (e.g., weight and bias); in the case of a Convolution layer, the first element of blobs is the weight and the second element is the bias:
caffe.Net.params['layer_name'][0] : weight
caffe.Net.params['layer_name'][1] : bias
Please note that reading a blob's memory is done through caffe.Net.params['layer_name'][0].data, while updating it in place should be done with ellipsis indexing, i.e., caffe.Net.params['layer_name'][0].data[...].
The following code illustrates loading learnable parameters from numpy saved files (.npy):
import numpy as np
from pathlib import Path

def load_weights_and_biases(network):
    k_list = list(network.params.keys())
    suffix = ["weight", "bias"]
    num_layers = len(network.layer_dict)
    for idx, layer_name in enumerate(network.layer_dict):
        print("\n-----------------------------")
        print(f"layer index: {idx}/{num_layers}")
        print(f"layer name: '{layer_name}'")
        print(f"layer type: '{network.layers[idx].type}'")
        if layer_name in k_list:
            params = network.params[layer_name]
            print(f"{len(params)} learnable parameters in '{network.layers[idx].type}' type")
            for i, p in enumerate(params):
                #print(f"\tparams[{i}]: {p}")
                #print(f"\tparams[{i}] CxHxW: {p.channels}x{p.height}x{p.width}")
                print(f"\tp[{i}]: {p.data.shape} of {p.data.dtype}")
                param_file_path = f"./npy_save/{layer_name}_{suffix[i]}.npy"
                param_file = Path(param_file_path)
                if param_file.exists():
                    print(f"\tload {param_file_path}")
                    arr = np.load(param_file_path, allow_pickle=True)
                    if p.data.shape == arr.shape:
                        print(f"\tset {layer_name}_{suffix[i]} with arr: shape {arr.shape}, type {arr.dtype}")
                        p.data[...] = arr
                    else:
                        print(f"p.data.shape: {p.data.shape} is not equal to arr.shape: {arr.shape}")
                        break
                else:
                    print(f"{param_file_path} does not exist!!")
                    break
        else:
            print(f"no learnable parameters in '{layer_name}' of '{network.layers[idx].type}' type")
The Blob type is defined as caffe._caffe.Blob in the Python caffe (aka pycaffe) interface. Use help(caffe._caffe.Blob) after import caffe, and use the names listed in the "data descriptors defined here" section of the help output as attributes.
For more detailed info on Blob in Caffe, refer to:
Blobs, Layers, and Nets: anatomy of a Caffe model - caffe documentation
caffe::Blob Class Template Reference - C++ source for the Blob class
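As a small illustration of those data descriptors (a sketch assuming net is an already loaded caffe.Net and that it has a layer named "conv1"):
blob = net.params['conv1'][0]  # a caffe._caffe.Blob holding the conv1 weights
print(blob.shape)              # parameter shape, e.g. (10, 1, 3, 3)
print(blob.data.dtype)         # blob.data is a numpy view of the blob's memory
print(blob.diff.shape)         # gradient buffer, same shape as data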

Deconv implementation in keras output_shape issue

I am implementing the following colorization model written in Caffe. I am confused about the output_shape parameter to supply in Keras:
model.add(Deconvolution2D(256,4,4,border_mode='same',
output_shape=(None,3,14,14),subsample=(2,2),dim_ordering='th',name='deconv_8.1'))
I have added a dummy output_shape parameter, but how can I determine the output parameter? In the Caffe model the layer is defined as:
layer {
  name: "conv8_1"
  type: "Deconvolution"
  bottom: "conv7_3norm"
  top: "conv8_1"
  convolution_param {
    num_output: 256
    kernel_size: 4
    pad: 1
    dilation: 1
    stride: 2
  }
}
If I do not supply this parameter the code gives a parameter error, but I cannot understand what I should supply as output_shape.
P.S. I already asked this on the Data Science forum with no response, maybe due to the smaller user base.
What output shape does the Caffe deconvolution layer produce?
For this colorization model in particular you can simply refer to page 24 of their paper (which is linked in their GitHub page):
So basically the output shape of this deconvolution layer in the original model is [None, 56, 56, 128]. This is what you want to pass to Keras as output_shape. The only problem is, as I mention in the section below, that Keras doesn't really use this parameter to determine the output shape, so you need to run a dummy prediction to find out what your other parameters need to be in order to get what you want.
More generally the Caffe source code for computing its Deconvolution layer output shape is:
const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_extent - 2 * pad_data[i];
Which with a dilation argument equal to 1 reduces to just:
const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_shape_data[i] - 2 * pad_data[i];
Note that this matches the Keras documentation when the parameter a is zero:
Formula for calculation of the output shape: o = s * (i - 1) + a + k - 2p
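As a quick illustrative helper (not part of either framework's API), the same formula can be evaluated directly in Python:
def deconv_output_dim(input_dim, kernel_size, stride, pad, dilation=1):
    # mirrors the Caffe computation quoted above
    kernel_extent = dilation * (kernel_size - 1) + 1
    return stride * (input_dim - 1) + kernel_extent - 2 * pad

# conv8_1 above: kernel_size 4, stride 2, pad 1 - a 28x28 input maps to 56x56
print(deconv_output_dim(28, kernel_size=4, stride=2, pad=1))  # 56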
How to verify actual output shape with your Keras backend
This is tricky, because the actual output shape depends on the backend implementation and configuration. Keras is currently unable to find it on its own. So you actually have to execute a prediction on some dummy input to find the actual output shape. Here's an example of how to do this from the Keras docs for Deconvolution2D:
To pass the correct `output_shape` to this layer,
one could use a test model to predict and observe the actual output shape.
# Examples
```python
# apply a 3x3 transposed convolution with stride 1x1 and 3 output filters on a 12x12 image:
model = Sequential()
model.add(Deconvolution2D(3, 3, 3, output_shape=(None, 3, 14, 14), border_mode='valid', input_shape=(3, 12, 12)))
# Note that you will have to change the output_shape depending on the backend used.
# we can predict with the model and print the shape of the array.
dummy_input = np.ones((32, 3, 12, 12))
# For TensorFlow dummy_input = np.ones((32, 12, 12, 3))
preds = model.predict(dummy_input)
print(preds.shape)
# Theano GPU: (None, 3, 13, 13)
# Theano CPU: (None, 3, 14, 14)
# TensorFlow: (None, 14, 14, 3)
```
Reference: https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L507
Also, you might be curious to know why it is that the output_shape parameter apparently doesn't really define the output shape. According to the post Deconvolution2D layer in keras, this is why:
Back to Keras and how the above is implemented. Confusingly, the output_shape parameter is actually not used for determining the output shape of the layer, and instead they try to deduce it from the input, the kernel size and the stride, while assuming only valid output_shapes are supplied (though it's not checked in the code to be the case). The output_shape itself is only used as input to the backprop step. Thus, you must also specify the stride parameter (subsample in Keras) in order to get the desired result (which could've been determined by Keras from the given input shape, output shape and kernel size).

Test labels for regression caffe, float not allowed?

I am doing regression using caffe, and my test.txt and train.txt files are like this:
/home/foo/caffe/data/finetune/flickr/3860781056.jpg 2.0
/home/foo/caffe/data/finetune/flickr/4559004485.jpg 3.6
/home/foo/caffe/data/finetune/flickr/3208038920.jpg 3.2
/home/foo/caffe/data/finetune/flickr/6170430622.jpg 4.0
/home/foo/caffe/data/finetune/flickr/7508671542.jpg 2.7272
My problem is that it seems Caffe does not allow float labels like 2.0. When I use float labels, while reading the 'test.txt' file for example, Caffe only recognizes
a total of 1 images
which is wrong.
But when I change, for example, the 2.0 to 2 in the file (and likewise in the following lines), Caffe now gives
a total of 2 images
implying that the float labels are responsible for the problem.
Can anyone help me solve this problem? I definitely need to use float labels for regression, so does anyone know of a workaround or solution for this? Thanks in advance.
EDIT
For anyone facing a similar issue, use caffe to train Lenet with CSV data might be of help. Thanks to @Shai.
When using the image dataset input layer (with either lmdb or leveldb backend) caffe only supports one integer label per input image.
If you want to do regression, and use floating point labels, you should try and use the HDF5 data layer. See for example this question.
In Python you can use the h5py package to create HDF5 files:
import h5py, os
import caffe
import numpy as np

SIZE = 224 # fixed size to all images
with open( 'train.txt', 'r' ) as T :
    lines = T.readlines()
# If you do not have enough memory split data into
# multiple batches and generate multiple separate h5 files
X = np.zeros( (len(lines), 3, SIZE, SIZE), dtype='f4' )
y = np.zeros( (len(lines), 1), dtype='f4' )
for i, l in enumerate(lines):
    sp = l.split(' ')
    img = caffe.io.load_image( sp[0] )
    img = caffe.io.resize( img, (SIZE, SIZE, 3) ) # resize to fixed size
    # you may apply other input transformations here...
    # Note that the transformation should take img from size-by-size-by-3 and transpose it to 3-by-size-by-size,
    # for example:
    transposed_img = img.transpose((2,0,1))[::-1,:,:] # HxWx3 -> 3xHxW, RGB->BGR
    X[i] = transposed_img
    y[i] = float(sp[1])
with h5py.File('train.h5','w') as H:
    H.create_dataset( 'X', data=X ) # note the name X given to the dataset!
    H.create_dataset( 'y', data=y ) # note the name y given to the dataset!
with open('train_h5_list.txt','w') as L:
    L.write( 'train.h5' ) # list all h5 files you are going to use
Once you have all the h5 files and the corresponding text files listing them, you can add an HDF5 input layer to your train_val.prototxt:
layer {
  type: "HDF5Data"
  top: "X" # same name as given in create_dataset!
  top: "y"
  hdf5_data_param {
    source: "train_h5_list.txt" # do not give the h5 files directly, but the list.
    batch_size: 32
  }
  include { phase: TRAIN }
}
Clarification:
When I say "caffe only supports one integer label per input image" I do not mean that the leveldb/lmdb containers are limited, I meant the tools of caffe, specifically the convert_imageset tool.
On closer inspection, it seems that caffe stores data of type Datum in leveldb/lmdb, and the "label" property of this type is defined as an integer (see caffe.proto); thus when using the caffe interface to leveldb/lmdb you are restricted to a single int32 label per image.
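For reference, this is easy to check from Python: the Datum message's label field only accepts integers, while its repeated float_data field (which caffe.io.array_to_datum uses for non-uint8 arrays, as in the LMDB snippet below) holds floats:
from caffe.proto import caffe_pb2

datum = caffe_pb2.Datum()
datum.channels, datum.height, datum.width = 1, 1, 1
datum.label = 2                    # int32 field used by convert_imageset
datum.float_data.extend([2.7272])  # float values have to go here instead
print(datum)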
Shai's answer already covers saving float labels to HDF5 format. In case LMDB is required/preferred, here's a snippet on how to create an LMDB from float data (adapted from this github comment):
import lmdb
import numpy as np
import caffe

def scalars_to_lmdb(scalars, path_dst):
    db = lmdb.open(path_dst, map_size=int(1e12))
    with db.begin(write=True) as in_txn:
        for idx, x in enumerate(scalars):
            content_field = np.array([x])
            # expand to shape (1, 1, 1): channels x height x width
            content_field = np.expand_dims(content_field, axis=0)
            content_field = np.expand_dims(content_field, axis=0)
            content_field = content_field.astype(float)
            dat = caffe.io.array_to_datum(content_field)
            # keys must be bytes; zero-pad the index so entries stay sorted
            in_txn.put('{:0>10d}'.format(idx).encode('ascii'), dat.SerializeToString())
    db.close()
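For example, to write the float labels from the question into their own LMDB (the path is a placeholder):
scalars_to_lmdb([2.0, 3.6, 3.2, 4.0, 2.7272], './train_labels_lmdb')
The images then typically go into a separate LMDB, and the network reads both with two Data layers using the same batch size.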
I ended up transposing, switching the channel order, and using unsigned ints rather than floats to get results. I suggest reading an image back from your HDF5 file to make sure it displays correctly.
First read the image as unsigned ints:
img = np.array(Image.open('images/' + image_name))
Then change the channel order from RGB to BGR:
img = img[:, :, ::-1]
Finally, switch from Height x Width x Channels to Channels x Height x Width:
img = img.transpose((2, 0, 1))
Merely changing the shape will scramble your image and ruin your data!
To read back the image:
from skimage.viewer import ImageViewer

with h5py.File(h5_filename, 'r') as hf:
    images_test = hf.get('images')
    targets_test = hf.get('targets')
    for i, img in enumerate(images_test):
        print(targets_test[i])
        viewer = ImageViewer(img.reshape(SIZE, SIZE, 3))
        viewer.show()
Here's a script I wrote which deals with two labels (steer and speed) for a self-driving car task: https://gist.github.com/crizCraig/aa46105d34349543582b177ae79f32f0
Besides @Shai's answer above, I wrote a MultiTaskData layer supporting float-typed labels.
Its main idea is to store the labels in the float_data field of Datum; the MultiTaskDataLayer then parses them as labels for any number of tasks, according to the values of task_num and label_dimension set in net.prototxt. The related files are: caffe.proto, multitask_data_layer.hpp/cpp, io.hpp/cpp.
You can easily add this layer to your own caffe and use it like this (this is an example for a facial expression label distribution learning task, in which "exp_label" can be a float-typed vector such as [0.1, 0.1, 0.5, 0.2, 0.1] representing the probability distribution over five expression classes):
name: "xxxNet"
layer {
name: "xxx"
type: "MultiTaskData"
top: "data"
top: "exp_label"
data_param {
source: "expression_ld_train_leveldb"
batch_size: 60
task_num: 1
label_dimension: 8
}
transform_param {
scale: 0.00390625
crop_size: 60
mirror: true
}
include:{ phase: TRAIN }
}
layer {
name: "exp_prob"
type: "InnerProduct"
bottom: "data"
top: "exp_prob"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 8
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "exp_loss"
type: "EuclideanLoss"
bottom: "exp_prob"
bottom: "exp_label"
top: "exp_loss"
include:{ phase: TRAIN }
}