How to properly add and use BatchNormLayer? - neural-network

Introduction
According to the Lasagne docs:
"This layer should be inserted between a linear transformation (such as a DenseLayer, or Conv2DLayer) and its nonlinearity. The convenience function batch_norm() modifies an existing layer to insert batch normalization in front of its nonlinearity."
However, Lasagne also has the utility function:
lasagne.layers.batch_norm
Due to the implementation on my end, I can't use that function.
My question is: how and where should I add the BatchNormLayer?
class lasagne.layers.BatchNormLayer(incoming, axes='auto', epsilon=1e-4, alpha=0.1, beta=lasagne.init.Constant(0), gamma=lasagne.init.Constant(1), mean=lasagne.init.Constant(0), inv_std=lasagne.init.Constant(1), **kwargs)
Can I add it after a convolution layer, or should I add it after the maxpool?
Do I have to manually remove the bias of the layers?
Approach used
So far I have only used it like this:
import lasagne
import theano
import theano.tensor as T
from lasagne.layers import BatchNormLayer

def build_network(height, width):  # enclosing builder function; name and signature reconstructed for readability
    input_var = T.tensor4('inputs')
    target_var = T.fmatrix('targets')
    network = lasagne.layers.InputLayer(shape=(None, 1, height, width),
                                        input_var=input_var)
    # BatchNormLayer placed directly after the input layer
    network = BatchNormLayer(network,
                             axes='auto',
                             epsilon=1e-4,
                             alpha=0.1,
                             beta=lasagne.init.Constant(0),
                             gamma=lasagne.init.Constant(1),
                             mean=lasagne.init.Constant(0),
                             inv_std=lasagne.init.Constant(1))
    network = lasagne.layers.Conv2DLayer(
        network, num_filters=60, filter_size=(3, 3), stride=1, pad=2,
        nonlinearity=lasagne.nonlinearities.rectify,
        W=lasagne.init.GlorotUniform())
    network = lasagne.layers.Conv2DLayer(
        network, num_filters=60, filter_size=(3, 3), stride=1, pad=1,
        nonlinearity=lasagne.nonlinearities.rectify,
        W=lasagne.init.GlorotUniform())
    network = lasagne.layers.MaxPool2DLayer(incoming=network, pool_size=(2, 2),
                                            stride=None, pad=(0, 0),
                                            ignore_border=True)
    network = lasagne.layers.DenseLayer(
        lasagne.layers.dropout(network, p=0.5),
        num_units=32,
        nonlinearity=lasagne.nonlinearities.rectify)
    network = lasagne.layers.DenseLayer(
        lasagne.layers.dropout(network, p=0.5),
        num_units=1,
        nonlinearity=lasagne.nonlinearities.sigmoid)
    return network, input_var, target_var
References:
https://github.com/Lasagne/Lasagne/blob/master/lasagne/layers/normalization.py#L120-L320
http://lasagne.readthedocs.io/en/latest/modules/layers/normalization.html

If not using batch_norm:
The BatchNormLayer should be added after the dense or convolution layer and before its nonlinearity.
Maxpool is a non-linear downsampling step that keeps the highest values in each pooling region; those values will already be normalized if you add the BatchNormLayer after your convolution/dense layer.
If not using batch_norm, you should also remove the bias of those layers manually, as it becomes redundant (BatchNormLayer has its own beta parameter).
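For reference, a minimal sketch of inserting it by hand, which mirrors what the batch_norm() helper does internally (the incoming network variable is assumed to come from your own model-building code):
import lasagne
from lasagne.layers import Conv2DLayer, BatchNormLayer, NonlinearityLayer

# Linear part only: no bias (BatchNormLayer supplies beta) and no nonlinearity yet
network = Conv2DLayer(
    network, num_filters=60, filter_size=(3, 3), stride=1, pad=2,
    b=None, nonlinearity=None,
    W=lasagne.init.GlorotUniform())
# Normalize the pre-activation output of the convolution
network = BatchNormLayer(network)
# Apply the nonlinearity after the normalization
network = NonlinearityLayer(network, nonlinearity=lasagne.nonlinearities.rectify)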
If you can use batch_norm after all, please test the code below and let us know whether it works for what you are trying to accomplish.
If it does not, you can try adapting the batch_norm code.
import lasagne
import theano
import theano.tensor as T
from lasagne.layers import batch_norm

input_var = T.tensor4('inputs')
target_var = T.fmatrix('targets')

network = lasagne.layers.InputLayer(shape=(None, 1, height, width), input_var=input_var)
network = lasagne.layers.Conv2DLayer(
    network, num_filters=60, filter_size=(3, 3), stride=1, pad=2,
    nonlinearity=lasagne.nonlinearities.rectify,
    W=lasagne.init.GlorotUniform())
network = batch_norm(network)
network = lasagne.layers.Conv2DLayer(
    network, num_filters=60, filter_size=(3, 3), stride=1, pad=1,
    nonlinearity=lasagne.nonlinearities.rectify,
    W=lasagne.init.GlorotUniform())
network = batch_norm(network)
network = lasagne.layers.MaxPool2DLayer(incoming=network, pool_size=(2, 2), stride=None,
                                        pad=(0, 0), ignore_border=True)
network = lasagne.layers.DenseLayer(
    lasagne.layers.dropout(network, p=0.5),
    num_units=32,
    nonlinearity=lasagne.nonlinearities.rectify)
network = batch_norm(network)
network = lasagne.layers.DenseLayer(
    lasagne.layers.dropout(network, p=0.5),
    num_units=1,
    nonlinearity=lasagne.nonlinearities.sigmoid)
network = batch_norm(network)
When getting the params to create the graph for your update method, remember to set trainable to True:
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.adadelta($YOUR_LOSS_HERE, params)

Related

Running blenderbot-3B model locally does not provide same result as on Inference API

I tried the facebook/blenderbot-3B model using the Hosted Inference API and it works pretty well (https://huggingface.co/facebook/blenderbot-3B). Now I tried to use it locally with the Python script shown below. The generated responses are much worse than those from the Inference API and do not make sense most of the time.
Is different code used for the Inference API, or did I make a mistake?
from transformers import TFAutoModelForCausalLM, AutoTokenizer, BlenderbotTokenizer, TFBlenderbotForConditionalGeneration, TFT5ForConditionalGeneration, BlenderbotTokenizer, BlenderbotForConditionalGeneration
import tensorflow as tf
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

chat_bots = {
    'BlenderBot': [BlenderbotTokenizer.from_pretrained("hyunwoongko/blenderbot-9B"),
                   BlenderbotForConditionalGeneration.from_pretrained("hyunwoongko/blenderbot-9B").to(device)],
}
key = 'BlenderBot'
tokenizer, model = chat_bots[key]

for step in range(100):
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt').to(device)
    if step > 0:
        bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
    else:
        bot_input_ids = new_user_input_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).to(device)
    print("Bot: ", tokenizer.batch_decode(chat_history_ids, skip_special_tokens=True)[0])

Dash app connections to AWS postgres DB VERY SLOW

I've created a live-updating Dash app connected to a public-facing AWS Postgres database. I've put the db connection inside my callback so it updates, but it takes a very long time to retrieve data and create the graph, so much so that if the interval is reduced to 10 seconds or less, no graph loads at all. I've tried to store the data in dcc.Store, but the initial load still takes a very long time. My abbreviated code is below. I'm assuming the lag comes from the engine connecting to the database, because I am only reading a few rows and columns. Is there any way to speed this up?
import plotly.graph_objs as go
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
from plotly.subplots import make_subplots
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, String, func, Date, ARRAY
from sqlalchemy.orm import sessionmaker

app = dash.Dash(__name__, external_stylesheets=[BS], suppress_callback_exceptions=True, update_title=None)
server = app.server

app.layout = html.Div([
    dcc.Store(id='time', storage_type='session'),
    dcc.Store(id='blood_pressure', storage_type='session'),
    html.Div(dcc.Graph(id='live-graph', animate=False), className='w-100'),
    html.Div(id="testing"),
    dcc.Interval(
        id='graph-update-BP',
        interval=30000,
        n_intervals=0
    )
])  # abbreviated: originally wrapped in a dbc.Col(..., width={"size": 10, "offset": 0.5})

@app.callback(
    dash.dependencies.Output('live-graph', 'figure'),
    dash.dependencies.Output('blood_pressure', 'data'),
    dash.dependencies.Output('time', 'data'),
    [dash.dependencies.Input('graph-update-BP', 'n_intervals')],
    Input('live-graph', 'relayoutData'),
)
def update_graph_scatter_1(n, relayout_data):
    trace = []
    blood_pressure = []
    time = []
    engine = create_engine("postgresql://username:password@address:5432/xxxxx", echo=True, future=True)
    Session = sessionmaker(bind=engine)
    session = Session()
    Base = automap_base()
    Base.prepare(engine, reflect=True)
    User = Base.classes.users
    Datex = Base.classes.data
    for instance in session.query(Datex).filter(Datex.user_id == 3).filter(Datex.date_time == 'Monday,Apr:26'):
        blood_pressure.append([instance.systolic, instance.mean, instance.diastolic])
        time.append(instance.time)
    for i in range(0, len(blood_pressure)):
        trace.append(go.Box(y=blood_pressure[i],
                            x=time[i],
                            line=dict(color='#6a92ff'),
                            hoverinfo='all'))
    fig = make_subplots(rows=1, cols=1)
    def append_trace():
        for i in range(0, len(trace)):
            fig.append_trace(trace[i], 1, 1)
    append_trace()
    return fig, blood_pressure, time
You can increase performance in your app in the following ways:
Non-programming methods:
If your app is deployed on AWS, ensure your app is connecting to your database over private IP. This reduces the number of networks your data has to traverse and will result in significantly lower latency.
Ensure your virtual machine has enough RAM. (If you're loading 2GB of data to a machine with 1GB available RAM, you're going to see the IO hit disk before loading to your program.)
Programming methods:
Modularize connecting to your database, and only do it once. This decreases the overhead required to reserve resources and authenticate the connection to the database.
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base

class DbConnection:
    """Use this class to connect to your database within a dash app"""
    def __init__(self, **kwargs):
        self.DB_URI = os.environ.get('DB_URI', kwargs.get('DB_URI'))
        self.echo = kwargs.get('echo', True)
        self.future = kwargs.get('future', True)
        # Now create the engine
        self.engine = create_engine(self.DB_URI, echo=self.echo, future=self.future)
        # Make the session maker
        self.session_maker = sessionmaker(bind=self.engine)

    @property
    def session(self):
        """Return a session as a property"""
        return self.session_maker()

# -------------------------------------------
# In your app, instantiate the database connection
# and map your base
my_db_connection = DbConnection()  # provide kwargs as needed
session = my_db_connection.session  # assign the property to a variable
# Map the classes
Base = automap_base()
Base.prepare(my_db_connection.engine, reflect=True)
User = Base.classes.users
Datex = Base.classes.data
Cache frequently queried data. Unless your data is massive and dramatically varying, you should expect better performance from loading the data from disk (or RAM) on your machine, than over the network from your database.
from functools import lru_cache

@lru_cache()
def get_blood_pressure(session, user_id, date):
    """Returns blood pressure readings for a given user on a given date"""
    blood_pressure, time = [], []
    query = session.query(Datex)\
        .filter(Datex.user_id == user_id)\
        .filter(Datex.date_time == date)
    # I like short variable names when interacting with db results
    for rec in query:
        time.append(rec.time)
        blood_pressure.append([rec.systolic, rec.mean, rec.diastolic])
    # finally
    return blood_pressure, time
Putting it all together, your callback should be a lot quicker:
def update_graph_scatter_1(n):
    # I'm not sure how these variables will be assigned,
    # but you'll figure it out
    blood_pressure, time = get_blood_pressure(session=session, user_id=user_id, date='Monday,Apr:26')
    # Create new traces
    trace = []
    for i in range(0, len(blood_pressure)):
        trace.append(go.Box(
            y=blood_pressure[i],
            x=time[i],
            line=dict(color='#6a92ff'),
            hoverinfo='all'
        ))
    # Add to subplots
    fig = make_subplots(rows=1, cols=1)
    for i in range(0, len(trace)):
        fig.append_trace(trace[i], 1, 1)
    return fig, blood_pressure, time
Lastly, it looks like you're recreating your graph objects on each update. This is a heavy operation, so I'd recommend updating the graph's data instead. I know this is possible, since I've done it in the past, but the solution is non-trivial, unfortunately. Perhaps an item for a later response or a follow-up question; a rough sketch of one possible approach follows the link below.
Further reading: https://dash.plotly.com/performance
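As a rough illustration of that last point (not a drop-in solution): dcc.Graph exposes an extendData property that appends points to existing traces, so a callback can push only the new readings instead of rebuilding the whole figure. The ids are reused from the question, and get_latest_reading() is a hypothetical helper you would replace with your own (cached) query:
from dash.dependencies import Input, Output

@app.callback(
    Output('live-graph', 'extendData'),
    Input('graph-update-BP', 'n_intervals'),
)
def extend_live_graph(n):
    # get_latest_reading() is hypothetical -- swap in your own query
    new_x, new_y = get_latest_reading()
    # extendData format: (new data, indices of traces to extend, max points kept per trace)
    return dict(x=[[new_x]], y=[[new_y]]), [0], 500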

Deploying Keras model to Google Cloud ML for serving predictions

I need to understand how to deploy models on Google Cloud ML. My first task is to deploy a very simple text classifier on the service. I do it in the following steps (could perhaps be shortened to fewer steps, if so, feel free to let me know):
Define the model using Keras and export to YAML
Load up YAML and export as a Tensorflow SavedModel
Upload model to Google Cloud Storage
Deploy model from storage to Google Cloud ML
Set the uploaded model version as default on the models website.
Run model with a sample input
I've finally made steps 1-5 work, but now I get the strange error shown below when running the model. Can anyone help? Details on the steps are below. Hopefully, they can also help others who are stuck on one of the previous steps. My model works fine locally.
I've seen Deploying Keras Models via Google Cloud ML and Export a basic Tensorflow model to Google Cloud ML, but they seem to be stuck on other steps of the process.
Error
Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="In[0] is not a matrix
[[Node: MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[-1,64]], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Mean, softmax_W/read)]]")
Step 1
# import necessary classes from Keras..
model_input = Input(shape=(maxlen,), dtype='int32')
embed = Embedding(input_dim=nb_tokens,
                  output_dim=256,
                  mask_zero=False,
                  input_length=maxlen,
                  name='embedding')
x = embed(model_input)
x = GlobalAveragePooling1D()(x)
outputs = [Dense(nb_classes, activation='softmax', name='softmax')(x)]
model = Model(input=[model_input], output=outputs, name="fasttext")
# export to YAML..
Step 2
from __future__ import print_function
import sys
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import keras
from keras import backend as K
from keras.models import model_from_config, model_from_yaml
from optparse import OptionParser

EXPORT_VERSION = 1  # for us to keep track of different model versions (integer)

def export_model(model_def, model_weights, export_path):
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        K.set_learning_phase(0)  # all new operations will be in test mode from now on
        yaml_file = open(model_def, 'r')
        yaml_string = yaml_file.read()
        yaml_file.close()
        model = model_from_yaml(yaml_string)
        # force initialization
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam')
        Wsave = model.get_weights()
        model.set_weights(Wsave)
        # weights are not loaded as I'm just testing, not really deploying
        # model.load_weights(model_weights)
        print(model.input)
        print(model.output)
        pred_node_names = output_node_names = 'Softmax:0'
        num_output = 1
        export_path_base = export_path
        export_path = os.path.join(
            tf.compat.as_bytes(export_path_base),
            tf.compat.as_bytes('initial'))
        builder = tf.saved_model.builder.SavedModelBuilder(export_path)
        # Build the signature_def_map.
        x = model.input
        y = model.output
        values, indices = tf.nn.top_k(y, 5)
        table = tf.contrib.lookup.index_to_string_table_from_tensor(
            tf.constant([str(i) for i in xrange(5)]))
        prediction_classes = table.lookup(tf.to_int64(indices))
        classification_inputs = tf.saved_model.utils.build_tensor_info(model.input)
        classification_outputs_classes = tf.saved_model.utils.build_tensor_info(prediction_classes)
        classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
        classification_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={tf.saved_model.signature_constants.CLASSIFY_INPUTS: classification_inputs},
                outputs={tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES: classification_outputs_classes,
                         tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES: classification_outputs_scores},
                method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
        tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
        tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
        prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'images': tensor_info_x},
                outputs={'scores': tensor_info_y},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
        legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict_images': prediction_signature,
                               tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: classification_signature},
            legacy_init_op=legacy_init_op)
        builder.save()
        print('Done exporting!')
        raise SystemExit

if __name__ == '__main__':
    usage = "usage: %prog [options] arg"
    parser = OptionParser(usage)
    (options, args) = parser.parse_args()
    if len(args) < 3:
        raise ValueError("Too few arguments!")
    model_def = args[0]
    model_weights = args[1]
    export_path = args[2]
    export_model(model_def, model_weights, export_path)
Step 3
gsutil cp -r fasttext_cloud/ gs://quiet-notch-xyz.appspot.com
Step 4
from __future__ import print_function
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import time

projectID = 'projects/{}'.format('quiet-notch-xyz')
modelName = 'fasttext'
modelID = '{}/models/{}'.format(projectID, modelName)
versionName = 'Initial'
versionDescription = 'Initial release.'
trainedModelLocation = 'gs://quiet-notch-xyz.appspot.com/fasttext/'

credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)

# Create a dictionary with the fields from the request body.
requestDict = {'name': modelName, 'description': 'Online predictions.'}

# Create a request to call projects.models.create.
request = ml.projects().models().create(parent=projectID, body=requestDict)

# Make the call.
try:
    response = request.execute()
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the model.' +
          ' Check the details:')
    print(err._get_reason())
    # Clear the response for next time.
    response = None
    raise

time.sleep(10)

requestDict = {'name': versionName,
               'description': versionDescription,
               'deploymentUri': trainedModelLocation}

# Create a request to call projects.models.versions.create
request = ml.projects().models().versions().create(parent=modelID,
                                                   body=requestDict)

# Make the call.
try:
    print("Creating model setup..", end=' ')
    response = request.execute()
    # Get the operation name.
    operationID = response['name']
    print('Done.')
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the version.' +
          ' Check the details:')
    print(err._get_reason())
    raise

done = False
request = ml.projects().operations().get(name=operationID)
print("Adding model from storage..", end=' ')
while not done:
    response = None
    # Wait for 10000 milliseconds.
    time.sleep(10)
    # Make the next call.
    try:
        response = request.execute()
        # Check for finish.
        done = True  # response.get('done', False)
    except errors.HttpError as err:
        # Something went wrong, print out some information.
        print('There was an error getting the operation.' +
              ' Check the details:')
        print(err._get_reason())
        done = True
        raise
print("Done.")
Step 5
Set the uploaded model version as default using the models website.
Step 6
import googleapiclient.discovery

def predict_json(instances, project='quiet-notch-xyz', model='fasttext', version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']
Then run the function with a test input: predict_json({'inputs':[[18, 87, 13, 589, 0]]})
There is now a sample demonstrating the use of Keras on CloudML engine, including prediction. You can find the sample here:
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/keras
I would suggest comparing your code to that code.
Some additional suggestions that will still be relevant:
CloudML Engine currently only supports using a single signature (the default signature). Looking at your code, I think prediction_signature is more likely to lead to success, but you haven't made that the default signature. I suggest the following:
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
    legacy_init_op=legacy_init_op)
If you are deploying to the service, then you would invoke prediction like so:
predict_json({'images':[[18, 87, 13, 589, 0]]})
If you are testing locally using gcloud ml-engine local predict --json-instances, the input data is slightly different (it matches that of the batch prediction service). Each newline-separated line looks like this (showing a file with two lines):
{"images": [[18, 87, 13, 589, 0]]}
{"images": [[21, 85, 13, 100, 1]]}
I don't actually know enough about the shape of model.x to ensure the data being sent is correct for your model.
By way of explanation, it may be insightful to consider the difference between the Classification and Prediction methods in SavedModel. One difference is that tensorflow_serving is based on gRPC, which is strongly typed, so Classification provides a strongly-typed signature that most classifiers can use; you can then reuse the same client on any classifier.
That's not overly useful when using JSON, since JSON isn't strongly typed.
One other difference is that, when using tensorflow_serving, Prediction accepts column-based inputs (a map from feature name to every value for that feature in the whole batch), whereas Classification accepts row-based inputs (each input instance/example is a row).
CloudML abstracts that away a bit and always requires row-based inputs (a list of instances). Even though we only officially support Prediction, Classification should work as well.
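To make the row-based format concrete, here is a small illustrative sketch of the request body the service expects, reusing the 'images' key from the prediction signature and the two sample inputs shown above (whether the inner nesting matches your model's input shape is still something to verify):
# One dict per instance, keyed by the input name from the prediction signature.
body = {
    'instances': [
        {'images': [[18, 87, 13, 589, 0]]},
        {'images': [[21, 85, 13, 100, 1]]},
    ]
}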

OMNET++ wireless communication client-server

It's the first time I'm using OMNET++.
I want to start from the very basic stuff to understand how it works.
I successfully created my first simulation with two hosts continuously exchanging messages (based on the tictoc example).
What I would like to do now is to simulate a simple client-server wireless communication between one AP and one wireless node. I'm trying to do that using elements from the INET framework, but I'm stuck and it is not working.
import inet.networklayer.configurator.base.NetworkConfiguratorBase;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.networklayer.configurator.ipv4.IPv4NodeConfigurator;
import inet.node.inet.WirelessHost;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.common.packetlevel.RadioMedium;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211RadioMedium;

//
// TODO documentation
//
network net
{
    string mediumType = default("IdealRadioMedium");
    @display("bgb=620,426");
    submodules:
        wirelessHost1: WirelessHost {
            @display("p=423,164");
        }
        accessPoint1: AccessPoint {
            @display("p=147,197");
        }
        iRadioMedium: <mediumType> like IRadioMedium {
            @display("p=523,302");
        }
        iPv4NetworkConfigurator: IPv4NetworkConfigurator {
            @display("p=270,324");
            assignDisjunctSubnetAddresses = false;
        }
}
Then I created a wirelessHost.cc source file using the tictoc behaviour to make the two nodes communicate.
But it is not working; I get this error:
<!> Error in module (inet::IPv4NodeConfigurator) infrastructure.wirelessHost1.networkLayer.configurator (id=13) during network initialization: Configurator module 'configurator' not found (check the 'networkConfiguratorModule' parameter).
Before that, there was another error about the Access Point (could not find wlan[0] module).
Can someone help me to understand how to configure this model?
EDIT
Here's the configuration .ini file
[General]
network = infrastructure
#cmdenv-output-file = omnetpp.log
#debug-on-errors = true
tkenv-plugin-path = ../../../etc/plugins
#record-eventlog = true
**.constraintAreaMinX = 0m
**.constraintAreaMinY = 0m
**.constraintAreaMinZ = 0m
**.constraintAreaMaxX = 600m
**.constraintAreaMaxY = 500m
**.constraintAreaMaxZ = 0m
**.mobility.typename = "StationaryMobility"
**.mobilityType = "StationaryMobility"
# access point
*.accessPoint.wlan[0].mac.address = "004444444444"
*.accessPoint.wlan[0].radio.channelNumber = 11
# host1 is associated with AP1 on channel 0
**.wirelessHost1.wlan[0].mgmt.accessPointAddress = "004444444444"
*.wirelessHost1.**.mgmtType = "Ieee80211MgmtSTASimplified"
# global data rates
**.wlan*.bitrate = 11Mbps
# application level: host1 pings host2
**.numPingApps = 1
*.wirelessHost1.pingApp[0].destAddr = "accessPoint"
*.wirelessHost1.pingApp[0].sendInterval = 10ms
but running the simulation I get
<!> Error in module (inet::ICMP) infrastructure.wirelessHost1.networkLayer.icmp (id=17) at event #4, t=0.008442657441: check_and_cast(): cannot cast (inet::GenericNetworkProtocolControlInfo*) to type 'inet::IPv4ControlInfo *'.
An instance of IPv4NetworkConfigurator in the network has to be named configurator. After changing its name, your second problem should be resolved too.
Moreover, the RadioMedium instance module has to be named radioMedium (instead of iRadioMedium).
EDIT
You have made two mistakes.
An AccessPoint does not have a network layer, because it only relays MAC frames using the MAC layer and MAC addresses, just like in a real network. As a consequence, it does not have an IP address, and it is impossible to send an ICMP ping to it.
OMNeT++ allows using a module's name instead of an IP address in the ini file, for example **.destAddr = "wirelessHost1". In your ini you are trying to use the non-existent accessPoint instead of accessPoint1 (which would be incorrect anyway because of the first mistake).
I suggest adding a new WirelessHost (for example wirelessHost2) and sending the ping towards it, i.e.
*.wirelessHost1.pingApp[0].destAddr = "wirelessHost2"

How to Create CaffeDB training data for siamese networks out of image directory

I need some help creating a CaffeDB for a siamese CNN out of a plain directory of images and a label text file. Ideally it would be done in Python.
The problem is not walking through the directory and making pairs of images; my problem is making a CaffeDB out of those pairs.
So far I have only used convert_imageset to create a CaffeDB out of an image directory.
Thanks for the help!
Why don't you simply make two datasets using good old convert_imageset?
layer {
  name: "data_a"
  top: "data_a"
  top: "label_a"
  type: "Data"
  data_param { source: "/path/to/first/data_lmdb" }
  ...
}
layer {
  name: "data_b"
  top: "data_b"
  top: "label_b"
  type: "Data"
  data_param { source: "/path/to/second/data_lmdb" }
  ...
}
As for the loss, since every example has a class label you need to convert label_a and label_b into a same_not_same_label. I suggest you do this "on-the-fly" using a Python layer. In the prototxt, add the call to the Python layer:
layer {
  name: "a_b_to_same_not_same_label"
  type: "Python"
  bottom: "label_a"
  bottom: "label_b"
  top: "same_not_same_label"
  python_param {
    # the module name -- usually the filename -- that needs to be in $PYTHONPATH
    module: "siamese"
    # the layer name -- the class name in the module
    layer: "SiameseLabels"
  }
  propagate_down: false
}
Create siamese.py (make sure it is in your $PYTHONPATH). In siamese.py you should have the layer class:
import sys, os
sys.path.insert(0, os.environ['CAFFE_ROOT'] + '/python')
import caffe

class SiameseLabels(caffe.Layer):
    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception('must have exactly two inputs')
        if len(top) != 1:
            raise Exception('must have exactly one output')

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].shape)

    def forward(self, bottom, top):
        top[0].data[...] = (bottom[0].data == bottom[1].data).astype('f4')

    def backward(self, top, propagate_down, bottom):
        # no back prop
        pass
Make sure you shuffle the examples in the two sets in a different manner, so you get non-trivial pairs (a small sketch of preparing the two shuffled list files follows below). Moreover, if you construct the first and second data sets with different numbers of examples, then you will see different pairs at each epoch ;)
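As a minimal sketch of that preprocessing step (the labels.txt name and the 'relative/path label' line format are assumptions, matching the list-file format convert_imageset expects), you could write two independently shuffled list files and build one LMDB from each:
import random

# Assumed input: labels.txt with one "relative/path/to/image.jpg label" per line.
with open('labels.txt') as f:
    lines = f.readlines()

for list_name, seed in [('train_a.txt', 1), ('train_b.txt', 2)]:
    shuffled = lines[:]
    random.Random(seed).shuffle(shuffled)   # a different shuffle for each copy
    with open(list_name, 'w') as out:
        out.writelines(shuffled)

# Then build the two LMDBs, e.g.:
#   convert_imageset /path/to/images/ train_a.txt first_data_lmdb
#   convert_imageset /path/to/images/ train_b.txt second_data_lmdb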
Make sure you construct the network to share the weights of the duplicated layers, see this tutorial for more information.