OMNeT++ wireless client-server communication simulation

It's the first time I'm using OMNET++.
I want to start from the very basic stuff to understand how it works.
I successfully created my first simulation with two hosts continuously exchanging messages (from the tictoc example).
What I would like to do now is to simulate a simple client-server wireless communication between one AP and one wireless node. I'm trying to do that using modules from the INET framework, but I'm stuck and it is not working.
import inet.networklayer.configurator.base.NetworkConfiguratorBase;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.networklayer.configurator.ipv4.IPv4NodeConfigurator;
import inet.node.inet.WirelessHost;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.common.packetlevel.RadioMedium;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211RadioMedium;

//
// TODO documentation
//
network net
{
    parameters:
        string mediumType = default("IdealRadioMedium");
        @display("bgb=620,426");
    submodules:
        wirelessHost1: WirelessHost {
            @display("p=423,164");
        }
        accessPoint1: AccessPoint {
            @display("p=147,197");
        }
        iRadioMedium: <mediumType> like IRadioMedium {
            @display("p=523,302");
        }
        iPv4NetworkConfigurator: IPv4NetworkConfigurator {
            @display("p=270,324");
            assignDisjunctSubnetAddresses = false;
        }
}
Then I created a wirelessHost.cc source file using the tictoc behaviour to make the two nodes communicate.
But it is not working; I get this error:
<!> Error in module (inet::IPv4NodeConfigurator) infrastructure.wirelessHost1.networkLayer.configurator (id=13) during network initialization: Configurator module 'configurator' not found (check the 'networkConfiguratorModule' parameter).
Before that, there was another error about the access point (could not find the wlan[0] module).
Can someone help me to understand how to configure this model?
EDIT
Here's the configuration .ini file
[General]
network = infrastructure
#cmdenv-output-file = omnetpp.log
#debug-on-errors = true
tkenv-plugin-path = ../../../etc/plugins
#record-eventlog = true
**.constraintAreaMinX = 0m
**.constraintAreaMinY = 0m
**.constraintAreaMinZ = 0m
**.constraintAreaMaxX = 600m
**.constraintAreaMaxY = 500m
**.constraintAreaMaxZ = 0m
**.mobility.typename = "StationaryMobility"
**.mobilityType = "StationaryMobility"
# access point
*.accessPoint.wlan[0].mac.address = "004444444444"
*.accessPoint.wlan[0].radio.channelNumber = 11
# host1 is associated with AP1 on channel 11
**.wirelessHost1.wlan[0].mgmt.accessPointAddress = "004444444444"
*.wirelessHost1.**.mgmtType = "Ieee80211MgmtSTASimplified"
# global data rates
**.wlan*.bitrate = 11Mbps
# application level: host1 pings host2
**.numPingApps = 1
*.wirelessHost1.pingApp[0].destAddr = "accessPoint"
*.wirelessHost1.pingApp[0].sendInterval = 10ms
but running the simulation I get
<!> Error in module (inet::ICMP) infrastructure.wirelessHost1.networkLayer.icmp (id=17) at event #4, t=0.008442657441: check_and_cast(): cannot cast (inet::GenericNetworkProtocolControlInfo*) to type 'inet::IPv4ControlInfo *'.

The IPv4NetworkConfigurator instance in the network has to be named configurator. After changing its name, your second problem should be resolved too.
Moreover, the RadioMedium instance has to be named radioMedium (instead of iRadioMedium).
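For example, the submodules section would then look like this (a sketch based on your posted NED file; only the submodule names change):

    submodules:
        wirelessHost1: WirelessHost {
            @display("p=423,164");
        }
        accessPoint1: AccessPoint {
            @display("p=147,197");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=523,302");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=270,324");
            assignDisjunctSubnetAddresses = false;
        }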
EDIT
You have made two mistakes.
AccessPoint does not have a network layer, because it only relays and sends MAC frames using the MAC layer and MAC addresses - like in a real network. As a consequence, it does not have an IP address, and it is impossible to send an ICMP ping to it.
OMNeT++ allows using a module's name instead of an IP address in the ini file, for example **.destAddr = "wirelessHost1". In your ini you are trying to use the non-existent accessPoint instead of accessPoint1 (which would not work anyway, because of the first mistake).
I suggest adding a new WirelessHost (for example wirelessHost2) and sending the ping towards it, i.e.
*.wirelessHost1.pingApp[0].destAddr = "wirelessHost2"
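A sketch of the required additions, reusing the parameters already in your files (the position is arbitrary):

In the NED file:

    wirelessHost2: WirelessHost {
        @display("p=300,250");
    }

In omnetpp.ini:

    **.wirelessHost2.wlan[0].mgmt.accessPointAddress = "004444444444"
    *.wirelessHost2.**.mgmtType = "Ieee80211MgmtSTASimplified"
    *.wirelessHost1.pingApp[0].destAddr = "wirelessHost2"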

uPY uart not communicating correctly with EG25-G

I had a motor controller connected to GP0 and GP1, so I know those pins work; however, I can't seem to get a response from the SIM controller. Without the Pico attached to the board I can get it to work, but when I add the Pico it seems like it won't send AT commands or translate received data, if the Pico is getting any at all. I have tried to run the code line by line in a live session, and all I get is a single number equal to the number of letters inside the string that I am sending to the SIM controller, i.e. uart.write(bytearray(b'ATE1\r\n')) would return >>> 6, with 6 being the number of characters after the b. I'm ordering a new Pico to see if maybe it was my subpar soldering, but in the meantime I'd like to see if anyone more experienced than I am can point out an error.
import machine
import os
import utime
import time
import binascii
from machine import UART

pwr_enable = 22  # EG25_4G Power key connected on GP22
uart_port = 0
uart_baud = 115200

# Initialize UART0
uart = machine.UART(uart_port, uart_baud)
print(os.uname())

def wait_resp_info(timeout=3000):
    prvmills = utime.ticks_ms()
    info = b""
    while (utime.ticks_ms() - prvmills) < timeout:
        if uart.any():
            info = b"".join([info, uart.read(1)])
    print(info.decode())
    return info

def Check_and_start():  # Initialize SIM Module
    while True:
        uart.write(bytearray(b'ATE1\r\n'))
        utime.sleep(10)
        uart.write(bytearray(b'AT\r\n'))
        rec_temp = wait_resp_info()
        print(wait_resp_info())
        print(rec_temp)
        print(rec_temp.decode())
        utime.sleep(10)
        if 'OK' in rec_temp.decode():
            print('OK response from AT command\r\n' + rec_temp.decode())
            break
        else:
            power = machine.Pin(pwr_enable, machine.Pin.OUT)
            power.value(1)
            utime.sleep(2)
            power.value(0)
            print('No response, restarting\r\n')
            utime.sleep(10)

def Network_check():  # Network connectivity check
    for i in range(1, 3):
        if Send_command("AT+CGREG?", "0,1") == 1:
            print('Connected\r\n')
            break
        else:
            print('Device is NOT connected\r\n')
            utime.sleep(2)
            continue

def Str_to_hex_str(string):
    str_bin = string.encode('utf-8')
    return binascii.hexlify(str_bin).decode('utf-8')

Check_and_start()
Network_check()
Response is
>>> Check_and_start()
b''
b'\x00\x00'
No response, restarting
A new Pico fixed my issue; I believe my inadequate soldering created the problem. The symptoms were that no UART data was being transmitted or received through UART pins 0 and 1. The solution: a new Pico board was inserted in place of the old one, the same code was uploaded, and it ran successfully the first time.
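For anyone else chasing similar symptoms: before swapping hardware, a UART loopback test can separate a dead pin or solder joint from a modem problem. A minimal sketch (assumes UART0 on GP0/GP1 as in the code above; jumper TX to RX first):

import machine
import utime

uart = machine.UART(0, 115200)  # UART0 on GP0 (TX) / GP1 (RX)
uart.write(b'ping')             # with GP0 jumpered to GP1 ...
utime.sleep_ms(100)
print(uart.read())              # ... this should print b'ping'; None means the pins are dead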

DPDK Hash failing to lookup data from secondary process

Failing to use existing rte Hash from secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
}
else {
    h = rte_hash_create(&params);
}

// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The Primary and secondary processes are different binaries but linked to the same DPDK library
The Key is added in primary as follows
struct cdev_key {
    uint64_t len;
};

struct cdev_key key = { 0 };

if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in secondary as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run 2 separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up the hash entry added from the primary in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, please remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to look up hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
Note: DPDK 19.02 has an inherent bug where /var/run/dpdk/ is not cleaned up; hence 19.11.2 LTS is recommended.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;

struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes, so the hash function stored inside the table is not usable in the secondary process. The suggested workaround is to perform the hash calculation directly in your own code, in whichever process needs it, and access the hash table using the precomputed hash value instead of the key alone.
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.
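Applied to the structures above, the workaround might look like this (a sketch: rte_jhash is called directly with the same init value 0 that was passed at creation, so the library never dereferences its stored, process-local hash_func pointer):

#include <rte_hash.h>
#include <rte_jhash.h>

struct cdev_key key = { 0 };
/* compute the signature ourselves instead of letting the library call hash_func */
hash_sig_t sig = rte_jhash(&key, sizeof(key), 0);

/* primary: */
if (rte_hash_add_key_with_hash_data(testptr, &key, sig, (void *)&test) < 0)
    fprintf(stderr, "add failed: %s\n", rte_strerror(rte_errno));

/* secondary: */
void *data = NULL;
int ret = rte_hash_lookup_with_hash_data(h, &key, sig, &data);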

Core WLAN MCS Index?

I'm trying to recreate the information displayed for the current Wi-Fi network when option-clicking on the Wi-Fi status bar item. One of the parameters shown is the MCS Index, but I can't find any way to query this value using the CWInterface class, which is where I am getting most of the other data:
if let interface = CWWiFiClient.shared().interface() {
    rssi = interface.rssiValue()
    noise = interface.noiseMeasurement()
    // etc.
}
Since both the Wi-Fi status bar item and the airport command line tool display the MCS Index, it seems like there should be some way to query it:
MacBook:~ mark$ /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I
agrCtlRSSI: -46
agrExtRSSI: 0
agrCtlNoise: -90
agrExtNoise: 0
state: running
op mode: station
lastTxRate: 878
maxRate: 1300
lastAssocStatus: 0
802.11 auth: open
link auth: wpa2-psk
BSSID: xx:xx:xx:xx:xx:xx
SSID: MyWiFi
MCS: 7
channel: 149,80
I've also seen some Python sample code that seems to indicate that the MCS index should be available, but I don't see it in the docs or in code completion.
Is there some way to get this value through Core WLAN or some other framework, or is this something I need to calculate based on other values?
I found another Python script, wifi_status.py, which reports the Wi-Fi status. From the lines
def wifi_status(properties=('bssid', 'channel', 'txRate', 'mcsIndex', 'rssi', 'noise')):
    xface = CWWiFiClient.sharedWiFiClient().interface()
    while True:
        yield({name: getattr(xface, name)() for name in properties})
one can conclude that these attributes can be retrieved with Key-Value Coding.
And that really works:
if let iface = CWWiFiClient.shared().interface() {
    if let mcsIndex = iface.value(forKey: "mcsIndex") as? Int {
        print(mcsIndex)
    }
}
But I have no idea whether that approach is officially supported, or whether it will keep working in the future, so use it at your own risk.
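The same trick appears to work for the other keys the Python script lists; here is a sketch (all of these are private, undocumented keys, so the same caveat applies, and any of them may return nothing):

import CoreWLAN

if let iface = CWWiFiClient.shared().interface() {
    // private keys taken from wifi_status.py
    for key in ["bssid", "channel", "txRate", "mcsIndex", "rssi", "noise"] {
        if let value = iface.value(forKey: key) {
            print(key, value)
        }
    }
}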

Deploying Keras model to Google Cloud ML for serving predictions

I need to understand how to deploy models on Google Cloud ML. My first task is to deploy a very simple text classifier on the service. I do it in the following steps (could perhaps be shortened to fewer steps, if so, feel free to let me know):
1. Define the model using Keras and export it to YAML
2. Load the YAML and export it as a TensorFlow SavedModel
3. Upload the model to Google Cloud Storage
4. Deploy the model from storage to Google Cloud ML
5. Set the uploaded model version as default on the models website
6. Run the model with a sample input
I've finally made steps 1-5 work, but now I get the strange error seen below when running the model. Can anyone help? Details on the steps are below. Hopefully it can also help others who are stuck on one of the previous steps. My model works fine locally.
I've seen Deploying Keras Models via Google Cloud ML and Export a basic Tensorflow model to Google Cloud ML, but they seem to be stuck on other steps of the process.
Error
Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="In[0] is not a matrix
[[Node: MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[-1,64]], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Mean, softmax_W/read)]]")
Step 1
# import necessary classes from Keras..
model_input = Input(shape=(maxlen,), dtype='int32')
embed = Embedding(input_dim=nb_tokens,
                  output_dim=256,
                  mask_zero=False,
                  input_length=maxlen,
                  name='embedding')
x = embed(model_input)
x = GlobalAveragePooling1D()(x)
outputs = [Dense(nb_classes, activation='softmax', name='softmax')(x)]
model = Model(input=[model_input], output=outputs, name="fasttext")
# export to YAML..
Step 2
from __future__ import print_function
import sys
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import keras
from keras import backend as K
from keras.models import model_from_config, model_from_yaml
from optparse import OptionParser

EXPORT_VERSION = 1  # for us to keep track of different model versions (integer)

def export_model(model_def, model_weights, export_path):
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        K.set_learning_phase(0)  # all new operations will be in test mode from now on

        yaml_file = open(model_def, 'r')
        yaml_string = yaml_file.read()
        yaml_file.close()
        model = model_from_yaml(yaml_string)

        # force initialization
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam')
        Wsave = model.get_weights()
        model.set_weights(Wsave)
        # weights are not loaded as I'm just testing, not really deploying
        # model.load_weights(model_weights)

        print(model.input)
        print(model.output)

        pred_node_names = output_node_names = 'Softmax:0'
        num_output = 1

        export_path_base = export_path
        export_path = os.path.join(
            tf.compat.as_bytes(export_path_base),
            tf.compat.as_bytes('initial'))
        builder = tf.saved_model.builder.SavedModelBuilder(export_path)

        # Build the signature_def_map.
        x = model.input
        y = model.output
        values, indices = tf.nn.top_k(y, 5)
        table = tf.contrib.lookup.index_to_string_table_from_tensor(
            tf.constant([str(i) for i in xrange(5)]))
        prediction_classes = table.lookup(tf.to_int64(indices))

        classification_inputs = tf.saved_model.utils.build_tensor_info(model.input)
        classification_outputs_classes = tf.saved_model.utils.build_tensor_info(prediction_classes)
        classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
        classification_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={tf.saved_model.signature_constants.CLASSIFY_INPUTS: classification_inputs},
                outputs={tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES: classification_outputs_classes,
                         tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES: classification_outputs_scores},
                method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))

        tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
        tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
        prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'images': tensor_info_x},
                outputs={'scores': tensor_info_y},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

        legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict_images': prediction_signature,
                               tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: classification_signature},
            legacy_init_op=legacy_init_op)
        builder.save()

        print('Done exporting!')
        raise SystemExit

if __name__ == '__main__':
    usage = "usage: %prog [options] arg"
    parser = OptionParser(usage)
    (options, args) = parser.parse_args()

    if len(args) < 3:
        raise ValueError("Too few arguments!")

    model_def = args[0]
    model_weights = args[1]
    export_path = args[2]
    export_model(model_def, model_weights, export_path)
Step 3
gsutil cp -r fasttext_cloud/ gs://quiet-notch-xyz.appspot.com
Step 4
from __future__ import print_function
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import time

projectID = 'projects/{}'.format('quiet-notch-xyz')
modelName = 'fasttext'
modelID = '{}/models/{}'.format(projectID, modelName)
versionName = 'Initial'
versionDescription = 'Initial release.'
trainedModelLocation = 'gs://quiet-notch-xyz.appspot.com/fasttext/'

credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)

# Create a dictionary with the fields from the request body.
requestDict = {'name': modelName, 'description': 'Online predictions.'}

# Create a request to call projects.models.create.
request = ml.projects().models().create(parent=projectID, body=requestDict)

# Make the call.
try:
    response = request.execute()
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the model.' +
          ' Check the details:')
    print(err._get_reason())
    # Clear the response for next time.
    response = None
    raise

time.sleep(10)

requestDict = {'name': versionName,
               'description': versionDescription,
               'deploymentUri': trainedModelLocation}

# Create a request to call projects.models.versions.create
request = ml.projects().models().versions().create(parent=modelID,
                                                   body=requestDict)

# Make the call.
try:
    print("Creating model setup..", end=' ')
    response = request.execute()
    # Get the operation name.
    operationID = response['name']
    print('Done.')
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the version.' +
          ' Check the details:')
    print(err._get_reason())
    raise

done = False
request = ml.projects().operations().get(name=operationID)
print("Adding model from storage..", end=' ')
while not done:
    response = None

    # Wait for 10000 milliseconds.
    time.sleep(10)

    # Make the next call.
    try:
        response = request.execute()
        # Check for finish.
        done = True  # response.get('done', False)
    except errors.HttpError as err:
        # Something went wrong, print out some information.
        print('There was an error getting the operation.' +
              'Check the details:')
        print(err._get_reason())
        done = True
        raise

print("Done.")
Step 5
Use website.
Step 6
import googleapiclient.discovery  # not shown in the original snippet, but required below

def predict_json(instances, project='quiet-notch-xyz', model='fasttext', version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)

    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']
Then run the function with a test input: predict_json({'inputs':[[18, 87, 13, 589, 0]]})
There is now a sample demonstrating the use of Keras on CloudML engine, including prediction. You can find the sample here:
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/keras
I would suggest comparing your code to that code.
Some additional suggestions that will still be relevant:
CloudML Engine currently only supports using a single signature (the default signature). Looking at your code, I think prediction_signature is more likely to lead to success, but you haven't made it the default signature. I suggest the following:
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
    legacy_init_op=legacy_init_op)
If you are deploying to the service, then you would invoke prediction like so:
predict_json({'images':[[18, 87, 13, 589, 0]]})
If you are testing locally using gcloud ml-engine local predict --json-instances, the input data is slightly different (it matches that of the batch prediction service). Each newline-separated line looks like this (showing a file with two lines):
{"images": [[18, 87, 13, 589, 0]]}
{"images": [[21, 85, 13, 100, 1]]}
I don't actually know enough about the shape of your model's input (x) to ensure the data being sent is correct for your model.
By way of explanation, it may be insightful to consider the difference between the Classification and Prediction methods in SavedModel. One difference is that tensorflow_serving is based on gRPC, which is strongly typed, so Classification provides a strongly-typed signature that most classifiers can use; you can then reuse the same client with any classifier.
That's not overly useful when using JSON, since JSON isn't strongly typed.
One other difference is that, when using tensorflow_serving, Prediction accepts column-based inputs (a map from feature name to every value for that feature in the whole batch), whereas Classification accepts row-based inputs (each input instance/example is a row).
CloudML abstracts that away a bit and always requires row-based inputs (a list of instances). Even though we only officially support Prediction, Classification should work as well.

Problem reading Serial Port C#.net 2.0 to get Weighing machine output

I'm trying to read weight from Sartorius Weighing Scale model No BS2202S using the following code in C#.net 2.0 on a Windows XP machine:
public string readWeight()
{
    string lastError = "";
    string weightData = "";

    SerialPort port = new SerialPort();
    port.PortName = "COM1";
    port.BaudRate = 9600;
    port.Parity = Parity.Even;
    port.DataBits = 7;
    port.StopBits = StopBits.One;
    port.Handshake = Handshake.RequestToSend;

    try {
        port.Open();
        weightData = port.ReadExisting();
        if (weightData == null || weightData.Length == 0) {
            lastError = "Unable to read weight. The data returned form weighing machine is empty or null.";
            return lastError;
        }
    }
    catch (TimeoutException) {
        lastError = "Operation timed out while reading weight";
        return lastError;
    }
    catch (Exception ex) {
        lastError = "The following exception occurred while reading data." + Environment.NewLine + ex.Message;
        return lastError;
    }
    finally {
        if (port.IsOpen == true) {
            port.Close();
            port.Dispose();
        }
    }

    return weightData;
}
I'm able to read the weight using the HyperTerminal application (supplied with Windows XP) with the same serial port parameters given above. But with the above code snippet, I can open the port, yet it returns empty data every time.
I tried opening the port using the code given in this Stack Overflow thread; it still returns empty data.
Kindly assist me.
I know this is probably old now ... but for future reference ...
Look at the handshaking. There is both hardware handshaking and software handshaking. Your problem could be either - so you need to try both.
For hardware handshaking you can try:
mySerialPort.DtrEnable = true;
mySerialPort.RtsEnable = true;
Note that
mySerialPort.Handshake = Handshake.RequestToSend;
does not, as far as I know, assert the DTR line, which some serial devices might require.
Software handshaking is also known as XON/XOFF and can be set with
mySerialPort.Handshake = Handshake.XOnXOff;
or
mySerialPort.Handshake = Handshake.RequestToSendXOnXOff;
You may still need to enable DTR.
When all else fails, don't forget to check all of these combinations of handshaking.
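If trying the combinations by hand gets tedious, a loop like this can probe them (a rough sketch, not from the Sartorius manual; the port settings are copied from the question, while the timeout and the use of ReadLine are assumptions - ReadLine blocks until data arrives or the timeout elapses, unlike ReadExisting, which returns immediately even when the buffer is empty):

using System;
using System.IO.Ports;

class HandshakeProbe
{
    static void Main()
    {
        Handshake[] modes = { Handshake.None, Handshake.XOnXOff,
                              Handshake.RequestToSend, Handshake.RequestToSendXOnXOff };
        foreach (Handshake hs in modes)
        {
            SerialPort port = new SerialPort("COM1", 9600, Parity.Even, 7, StopBits.One);
            port.Handshake = hs;
            port.DtrEnable = true; // some devices need DTR asserted
            // RtsEnable must not be set while RTS handshaking is active
            if (hs != Handshake.RequestToSend && hs != Handshake.RequestToSendXOnXOff)
                port.RtsEnable = true;
            port.ReadTimeout = 2000;
            try
            {
                port.Open();
                Console.WriteLine(hs + ": " + port.ReadLine());
            }
            catch (TimeoutException)
            {
                Console.WriteLine(hs + ": no data");
            }
            finally
            {
                port.Close();
            }
        }
    }
}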
Since someone else will probably have trouble with this in the future: handshaking is a selectable option on the balance itself.
On most of the balances you will see the options Software, Hardware 2 char, and Hardware 1 char. The default setting for the Sartorius balances is Hardware 2 char; I usually recommend changing it to Software.
Also, if it stops working altogether, it can often be fixed by restoring the unit's defaults using the 9 1 1 parameter and then re-entering the communication settings.
An example of how to change the settings can be found in the manual on this page:
http://www.dataweigh.com/products/sartorius/cpa-analytical-balances/