Bi-Encoder model_name and train_from_scratch not in object/dict - jaseci

When I attempt to train the bi-encoder from the bi_enc.jac file provided in the codelab, I get the following error:
2022-11-11 11:35:46,557 - ERROR - rt_error: enc.jac:blank - line 13, col 19 - rule atom_trailer - Key model_name not found in object/dict.
ERROR:core:enc.jac:blank - line 13, col 19 - rule atom_trailer - Key model_name not found in object/dict.
{
  "success": false,
  "report": [],
  "final_node": "urn:uuid:9730edd1-de38-4546-aed4-f291709c86b0",
  "yielded": false,
  "errors": [
    "enc.jac:blank - line 8, col 32 - rule atom_trailer - Key train_from_scratch not found in object/dict.",
    "enc.jac:blank - line 13, col 19 - rule atom_trailer - Key model_name not found in object/dict."
  ]
}
The lines of code where the errors are being thrown:
node bi_enc {
    can bi_enc.train, bi_enc.infer;

    can train {
        train_data = file.load_json(visitor.train_file);
        bi_enc.train(
            dataset=train_data,
            from_scratch=visitor.train_from_scratch,
            training_parameters={
                "num_train_epochs": visitor.num_train_epochs
            }
        );
        if (visitor.model_name):
            bi_enc.save_model(model_path=visitor.model_name);
    }
}
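For context, the visitor.* lookups read from the walker that reaches this node, so that walker has to carry train_from_scratch, model_name, and so on in its context. A minimal sketch of such a walker, with names and defaults assumed from the codelab rather than taken from my files:

walker train {
    // These `has` variables form the walker context that the node
    // reads through `visitor.*`; the names must match exactly.
    has train_file = "train.json";
    has train_from_scratch = true;
    has num_train_epochs = 50;
    has model_name = "";

    root {
        take --> node::bi_enc;
    }
    bi_enc {
        here::train;
    }
}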
I have not made any changes to the bi_enc.jac file nor defined any other variables named bi_enc in any of my linked files. What could be causing this issue?
Thanks

Related

Bicep template conversion - Error BCP034: The enclosing array expected an item of type "module[] | (resource | module) | resource[]"

I have converted an ARM template into a Bicep template, and during the conversion I got the following error for the lines below.
module virtualMachineName_VmAlertsRule_0 './nested_virtualMachineName_VmAlertsRule_0.bicep' = {
  name: '${virtualMachineName}-VmAlertsRule-0'
  params: {
    name: 'Percentage CPU - vm-name'
    severity: 3
    allOf: [
      {
        name: 'Metric1'
        metricName: 'Percentage CPU'
        metricNamespace: 'Microsoft.Compute/virtualMachines'
        operator: 'GreaterThan'
        timeAggregation: 'Average'
        criterionType: 'StaticThresholdCriterion'
        threshold: 80
      }
    ]
    actionGroups: [
      {
        actionGroupId: '/subscriptions/xxxxxx/resourceGroups/my_resource_group/providers/microsoft.insights/actionGroups/RecommendedAlertRules-AG-1'
        webhookProperties: {}
      }
    ]
    location: 'Global'
    vmResourceId: '/subscriptions/${subscription().subscriptionId}/resourceGroups/${resourceGroup().name}/providers/Microsoft.Compute/virtualMachines/${virtualMachineName}'
  }
  dependsOn: [
    resourceId(virtualMachineRG, 'Microsoft.Compute/virtualMachines', virtualMachineName)
    resourceId(virtualMachineRG, 'Microsoft.Resources/deployments', '${virtualMachineName}-AlertsActionGroup')
  ]
}
The error is as follows.
Error BCP034: The enclosing array expected an item of type "module[] | (resource | module) | resource[]", but the provided item was of type "string".
From my research, it appears I should be using a dynamic function, but I am unsure about this. The same error is repeated for all of the alerts; I am confident that once one is fixed, the others can easily be resolved as well.
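For what it's worth, BCP034 on these lines is typically because dependsOn in Bicep expects symbolic references to resources or modules declared in the same file, whereas resourceId(...) returns a string. A minimal sketch, assuming the VM resource and the action-group module are declared elsewhere in the template under the (assumed) symbolic names below:

dependsOn: [
  virtualMachine                        // assumed: resource virtualMachine 'Microsoft.Compute/virtualMachines@...' = { ... }
  virtualMachineName_AlertsActionGroup  // assumed: module for the '${virtualMachineName}-AlertsActionGroup' deployment
]

If a dependency target is not declared in this template at all, the corresponding dependsOn entry can simply be removed, since Bicep only tracks dependencies between declared resources and modules.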

How to connect multiple slave devices under the modbus-serial (rtu) connector

This is my configuration file. I'm using the new profile from https://thingsboard.io/docs/iot-gateway/config/modbus/ [new_modbus.json]. The new configuration format seems to allow several different devices to be configured under { "master": { "slaves": [] } }, but when I do so, I don't get the right results.
{
  "master": {
    "slaves": [
      {
        "unitId": 1,
        "deviceName": "test1",
        "attributesPollPeriod": 5000,
        "timeseriesPollPeriod": 5000,
        "sendDataOnlyOnChange": false,
        "attributes": [
          {
            "byteOrder": "BIG",
            "tag": "temperature",
            "type": "bytes",
            "functionCode": 3,
            "registerCount": 1,
            "address": 1
          }
        ],
        "timeseries": [
          {
            "tag": "distance",
            "type": "bytes",
            "functionCode": 3,
            "objectsCount": 1,
            "address": 2
          }
        ],
        "attributeUpdates": [
          {
            "tag": "shared_value_1",
            "type": "32uint",
            "functionCode": 6,
            "objectsCount": 2,
            "address": 3
          },
          {
            "tag": "shared_value_2",
            "type": "32uint",
            "functionCode": 6,
            "objectsCount": 2,
            "address": 4
          }
        ],
        "rpc": [
          {
            "tag": "bearing_bpfo",
            "type": "32uint",
            "functionCode": 6,
            "objectsCount": 2,
            "address": 5
          }
        ],
        "host": null,
        "port": "/dev/ttyUSB0",
        "type": "serial",
        "method": "rtu",
        "timeout": 35,
        "byteOrder": "BIG",
        "wordOrder": "BIG",
        "retries": null,
        "retryOnEmpty": null,
        "retryOnInvalid": null,
        "baudrate": 9600,
        "pollPeriod": 5000,
        "connectAttemptCount": 1
      },
      {
        "unitId": 2,
        "deviceName": "Test2",
        "attributesPollPeriod": 5000,
        "timeseriesPollPeriod": 5000,
        "sendDataOnlyOnChange": false,
        "attributes": [
          {
            "byteOrder": "BIG",
            "tag": "temperature",
            "type": "bytes",
            "functionCode": 3,
            "registerCount": 1,
            "address": 10
          }
        ],
        "timeseries": [
          {
            "tag": "distance",
            "type": "bytes",
            "functionCode": 3,
            "objectsCount": 1,
            "address": 11
          }
        ],
        "attributeUpdates": [
          {
            "tag": "shared_value_1",
            "type": "32uint",
            "functionCode": 6,
            "objectsCount": 2,
            "address": 12
          }
        ],
        "host": null,
        "port": "/dev/ttyUSB0",
        "type": "serial",
        "method": "rtu",
        "timeout": 35,
        "byteOrder": "BIG",
        "wordOrder": "BIG",
        "retries": null,
        "retryOnEmpty": null,
        "retryOnInvalid": null,
        "baudrate": 9600,
        "pollPeriod": 5000,
        "connectAttemptCount": 5
      }
    ]
  },
  "slave": null
}
The Connector name I am using is the Modbus Connector, and the version information for my deployment is as follows:
OS: Raspberry Pi
Thingsboard IoT Gateway version : 3.0.1
Python version : 3.9.2
Error traceback:
""2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 87 - datatype: telemetry key: distance value: None"
""2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 92 - {'deviceName': 'testUpdate', 'deviceType': 'default', 'telemetry': [], 'attributes': []}"
""2022-05-11 15:28:10" - |ERROR| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 83 - Modbus Error: [Input/Output] device reports readiness to read but returned no data (device disconnected or multiple access on port?)"
NoneType: None
""2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 87 - datatype: telemetry key: distance value: None"
""2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 92 - {'deviceName': 'RpcTest', 'deviceType': 'default', 'telemetry': [], 'attributes': []}"
""2022-05-11 15:28:10" - |ERROR| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 83 - Modbus Error: [Input/Output] device reports readiness to read but returned no data (device disconnected or multiple access on port?)"
NoneType: None
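For what it's worth, the Modbus error above ("device reports readiness to read but returned no data") usually points at the bus rather than at ThingsBoard: either a device that does not answer at the polled unit ID, or two readers contending for the same serial port. A minimal diagnostic sketch, assuming pymodbus 2.x is available (in pymodbus 3.x the keyword is slave= instead of unit=), that polls each slave directly with the unit IDs and addresses from the config above:

# Stop the gateway first so nothing else holds /dev/ttyUSB0.
from pymodbus.client.sync import ModbusSerialClient

client = ModbusSerialClient(method="rtu", port="/dev/ttyUSB0",
                            baudrate=9600, timeout=3)
if client.connect():
    for unit_id, address in [(1, 1), (2, 10)]:
        # Function code 3 (read holding registers), as in the gateway config.
        rr = client.read_holding_registers(address=address, count=1, unit=unit_id)
        print(unit_id, rr if rr.isError() else rr.registers)
    client.close()

If both slaves answer here but not through the gateway, the problem is in the connector configuration; if one of them times out here too, it is wiring or unit-ID related.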

greatexpectations - PySpark - ValueError: Unrecognized spark type: DecimalType(20,0)

I am trying to implement the to_be_of_type expectation mentioned here for DecimalType with precision and scale in PySpark.
However, I am getting following error while testing it.
{
  "success": False,
  "expectation_config": {
    "expectation_type": "expect_column_values_to_be_of_type",
    "meta": {},
    "kwargs": {
      "column": "project_id",
      "type_": "DecimalType(20,0)",
      "result_format": {
        "result_format": "SUMMARY"
      }
    }
  },
  "meta": {},
  "exception_info": {
    "raised_exception": True,
    "exception_message": "ValueError: Unrecognized spark type: DecimalType(20,0)",
    "exception_traceback": "Traceback (most recent call last):\n  File \"/home/spark/.local/lib/python3.7/site-packages/great_expectations/dataset/sparkdf_dataset.py\", line 1196, in expect_column_values_to_be_of_type\n    success = issubclass(col_type, getattr(sparktypes, type_))\nAttributeError: module \"pyspark.sql.types\" has no attribute \"DecimalType(20,0)\"\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/home/spark/.local/lib/python3.7/site-packages/great_expectations/data_asset/data_asset.py\", line 275, in wrapper\n    return_obj = func(self, **evaluation_args)\n  File \"/home/spark/.local/lib/python3.7/site-packages/great_expectations/dataset/sparkdf_dataset.py\", line 1201, in expect_column_values_to_be_of_type\n    raise ValueError(f\"Unrecognized spark type: {type_}\")\nValueError: Unrecognized spark type: DecimalType(20,0)\n"
  },
  "result": {}
}
Is there a way to validate DecimalType with specific precision and scale values?
I am using GE version 0.14.12 and PySpark version 2.4.3.
Let me know if you need any further information.
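One note for anyone reading: the traceback shows that the expectation resolves type_ with getattr(pyspark.sql.types, type_), so only a bare class name such as "DecimalType" can ever match; a parameterized string like "DecimalType(20,0)" is not an attribute of the module. A hedged workaround sketch, assuming ge_df is a SparkDFDataset wrapping the DataFrame (spark_df being the underlying Spark DataFrame attribute):

from pyspark.sql.types import DecimalType

# Let Great Expectations validate the type class only:
ge_df.expect_column_values_to_be_of_type("project_id", "DecimalType")

# Check precision and scale directly against the Spark schema:
col_type = ge_df.spark_df.schema["project_id"].dataType
assert isinstance(col_type, DecimalType)
assert col_type.precision == 20 and col_type.scale == 0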

Error: "serving_default not found in signature def" when testing prediction

I followed this tutorial and came to the point where I can test a prediction using the following code:
{
  "instances": [
    {"csv_row": "44, Private, 160323, Some-college, 10, Married-civ-spouse, Machine-op-inspct, Husband, Black, Male, 7688, 0, 40, United-States", "key": "dummy-key"}
  ]
}
However, I am getting the following error:
{
  "error": "{ \"error\": \"Serving signature name: \\\"serving_default\\\" not found in signature def\" }"
}
I presume the input format doesn't match what the model expects, but I am not entirely sure what is expected.
Any ideas as to what is causing the example code to throw this error?
I finally figured it out: I loaded the TensorFlow model in a Jupyter notebook and printed out the signatures:
new_model = tf.keras.models.load_model('modelPath')
print(list(new_model.signatures.keys()))
The result was [u'predict'], so the command I used to get a prediction is:
georg@Georgs-MBP ~ % gcloud ai-platform predict \
    --model $MODEL_NAME \
    --version "v1" \
    --json-instances sample_input.json \
    --format "value(predictions[0].classes[0])" \
    --signature-name "predict"
result:
Using endpoint [https://europe-west3-ml.googleapis.com/]
<=50K
To add signature serving_default:
import tensorflow as tf
m = tf.saved_model.load("tf2-preview_inception_v3_classification_4")
print(m.signatures) # _SignatureMap({}) - Empty
t_spec = tf.TensorSpec([None,None,None,3], tf.float32)
c_func = m.__call__.get_concrete_function(inputs=t_spec)
signatures = {'serving_default': c_func}
tf.saved_model.save(m, 'tf2-preview_inception_v3_classification_5', signatures=signatures)
# Test new model
m5 = tf.saved_model.load("tf2-preview_inception_v3_classification_5")
print(m5.signatures) # _SignatureMap({'serving_default': <ConcreteFunction signature_wrapper(*, inputs) at 0x17316DC50>})
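After re-saving with the serving_default signature, the new SavedModel directory presumably has to be deployed as a new model version; the original request should then work without the --signature-name flag, since serving_default is the signature name looked up by default.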

How can I correct this error with an AWS CloudFormation template

Team, I'm stressing out because I cannot find the error in the following JSON template I'm trying to run in AWS CloudFormation; I'm receiving the following error:
(Cannot render the template because of an error.: YAMLException: end of the stream or a document separator is expected at line 140, column 65: ... e" content="{"version": "4", "rollouts& ... ^
<meta name="optimizely-datafile" content="{"version": "4", "rollouts": [], "typedAudiences": [], "anonymizeIP": true, "projectId":
Please help!!!
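For what it's worth, the parser appears to be choking on the unescaped double quotes inside the content attribute: the meta tag embeds a JSON document, and inside a JSON-format CloudFormation template every literal " in a string value has to be written as \". A minimal sketch of the escaped form (the UserData placement is assumed for illustration, and single-quoting the HTML attribute removes one level of nesting):

"UserData": {
  "Fn::Base64": {
    "Fn::Join": ["\n", [
      "<meta name=\"optimizely-datafile\" content='{\"version\": \"4\", \"rollouts\": []}'>"
    ]]
  }
}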