pyfmi fails with "Stepsize must be non-negative and divisible by 0.2" - Simulink FMU

I built the world's simplest Simulink model, which sums two constants and sends the result to an outport. I exported this to an FMU and tried to run it in pyfmi.
The run fails with
FMUException: The simulation failed. See the log for more information. Return flag 3
The log says
['FMIL: module = FMILIB, log level = 5: Allocating FMIL context',
'FMIL: module = FMILIB, log level = 5: Parsing model description XML',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element fmiModelDescription',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element CoSimulation',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element VendorAnnotations',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element ModelVariables',
'FMIL: module = FMI2XML, log level = 4: [Line:31] Detected during parsing:',
'FMIL: module = FMI2XML, log level = 2: Start attribute is required for this causality, variability and initial combination',
'FMIL: module = FMI2XML, log level = 5: Building alias index',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element ModelStructure',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element Outputs',
'FMIL: module = FMI2XML, log level = 5: Parsing XML element InitialUnknowns',
'FMIL: module = FMILIB, log level = 5: Parsing finished successfully',
"FMIL: module = FMILIB, log level = 4: Loading 'linux64' binary with 'default' platform types",
'FMIL: module = FMICAPI, log level = 5: Loaded FMU binary from /tmp/bruce/JModelica.org/jm_tmp9uvk5t3l/binaries/linux64/simple_sum2const.so',
'FMIL: module = FMICAPI, log level = 5: Loading functions for the co-simulation interface',
'FMIL: module = FMILIB, log level = 5: Successfully loaded all the interface functions',
'FMIL: module = FMI2XML, log level = 3: fmi2_xml_get_default_experiment_tolerance: returning default value, since no attribute was defined in modelDescription',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:0, value:5.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:1, value:0.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=0.02, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 2: [error][FMU status:Error] Stepsize must be non-negative and divisible by 0.2',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=1, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] Local solver will do 5 steps from t = 0.',
'FMIL: module = FMI2XML, log level = 3: fmi2_xml_get_default_experiment_tolerance: returning default value, since no attribute was defined in modelDescription',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:0, value:5.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:1, value:0.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=0.02, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 2: [error][FMU status:Error] Stepsize must be non-negative and divisible by 0.2',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMI2XML, log level = 3: fmi2_xml_get_default_experiment_tolerance: returning default value, since no attribute was defined in modelDescription',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:0, value:5.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:1, value:0.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=0.02, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 2: [error][FMU status:Error] Stepsize must be non-negative and divisible by 0.2',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = FMI2XML, log level = 3: fmi2_xml_get_default_experiment_tolerance: returning default value, since no attribute was defined in modelDescription',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:0, value:5.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:1, value:0.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=0.02, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 2: [error][FMU status:Error] Stepsize must be non-negative and divisible by 0.2',
'FMIL: module = FMI2XML, log level = 3: fmi2_xml_get_default_experiment_tolerance: returning default value, since no attribute was defined in modelDescription',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2SetupExperiment',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2EnterInitializationMode',
'FMIL: module = FMICAPI, log level = 5: Calling fmi2ExitInitializationMode',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:0, value:5.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] getReal vr:1, value:0.000000',
'FMIL: module = Model, log level = 4: [info][FMU status:OK] CommunicationStepSize=0.004, LocalSolverStepSize=0.2',
'FMIL: module = Model, log level = 2: [error][FMU status:Error] Stepsize must be non-negative and divisible by 0.2']
I can't find any information on how to set the step size, or what it should be. I tried
model.do_step(0, 1.0)
model.initialize(0, 0.1, True)
model.simulate_options() shows
{'ncp': 500,
'initialize': True,
'stop_time_defined': False,
'write_scaled_result': False,
'result_file_name': '',
'result_handling': 'binary',
'result_handler': None,
'result_store_variable_description': True,
'return_result': True,
'time_limit': None,
'filter': None,
'silent_mode': False}
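For context, the failing run is essentially just loading the FMU and calling simulate() with these defaults. A rough sketch of that call (the FMU file name comes from my export; the final time of 10 s is a guess that matches the 0.02 communication step in the log):
from pyfmi import load_fmu

# Load the co-simulation FMU exported from Simulink and run it with the default options above
model = load_fmu('simple_sum2const.fmu')
res = model.simulate(final_time=10.0)  # default ncp=500 gives a communication step of 10/500 = 0.02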
How can I get this simple demo to work?


"ValueError: max_evals=500 is too low for the Permutation explainer" shap answers me do I have to give more data (photos)?

I want to test the explainability of a multiclass semantic segmentation model (deeplab_v3plus) with shap, to find out which features contribute the most to the semantic classification. However, I get ValueError: max_evals=500 is too low when running my file, and I struggle to understand the reason.
import glob
from PIL import Image
import torch
from torchvision import transforms
from torchvision.utils import make_grid
import torchvision.transforms.functional as tf
from deeplab import deeplab_v3plus
import shap

def test(args):
    # make a video prez
    model = deeplab_v3plus('resnet101', num_classes=args.nclass, output_stride=16, pretrained_backbone=True)
    model.load_state_dict(torch.load(args.seg_file, map_location=torch.device('cpu')))  # because no GPU is available on the sandbox environment
    model = model.to(args.device)
    model.eval()

    explainer = shap.Explainer(model)
    with torch.no_grad():
        for i, file in enumerate(args.img_folder):
            img = img2tensor(file, args)
            pred = model(img)
            print(explainer(img))

if __name__ == '__main__':
    class Arguments:
        def __init__(self):
            self.device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
            self.seg_file = "Model_Woodscape.pth"
            self.img_folder = glob.glob("test_img/*.png")
            self.mean = [0.485, 0.456, 0.406]
            self.std = [0.229, 0.224, 0.225]
            self.h, self.w = 483, 640
            self.nclass = 10
            self.cmap = {
                1: [128, 64, 128],   # "road"
                2: [69, 76, 11],     # "lanemarks"
                3: [0, 255, 0],      # "curb"
                4: [220, 20, 60],    # "person"
                5: [255, 0, 0],      # "rider"
                6: [0, 0, 142],      # "vehicles"
                7: [119, 11, 32],    # "bicycle"
                8: [0, 0, 230],      # "motorcycle"
                9: [220, 220, 0],    # "traffic_sign"
                0: [0, 0, 0]         # "void"
            }

    args = Arguments()
    test(args)
But it returns:
(dee_env) jovyan@jupyter:~/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+$ python test_shap.py
BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
Traceback (most recent call last):
File "/home/jovyan/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+/test_shap.py", line 85, in <module>
test(args)
File "/home/jovyan/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+/test_shap.py", line 37, in test
print(explainer(img))
File "/home/jovyan/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+/dee_env/lib/python3.9/site-packages/shap/explainers/_permutation.py", line 82, in __call__
return super().__call__(
File "/home/jovyan/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+/dee_env/lib/python3.9/site-packages/shap/explainers/_explainer.py", line 266, in __call__
row_result = self.explain_row(
File "/home/jovyan/use-cases/Scene_understanding/Code_Woodscape/deeplab_v3+/dee_env/lib/python3.9/site-packages/shap/explainers/_permutation.py", line 164, in explain_row
raise ValueError(f"max_evals={max_evals} is too low for the Permutation explainer, it must be at least 2 * num_features + 1 = {2 * len(inds) + 1}!")
ValueError: max_evals=500 is too low for the Permutation explainer, it must be at least 2 * num_features + 1 = 1854721!
In the source code it looks like it's because I don't give enough arguments. I only have three images in my test_img/* folder, is that why?
I have the same issue. A possible solution I found, which seems to work in my case, is to replace this line
explainer = shap.Explainer(model)
With this line
explainer = shap.explainers.Permutation(model, max_evals = 1854721)
shap.Explainer has algorithm='auto' by default. From the documentation of shap.Explainer:
By default the “auto” options attempts to make the best choice given
the passed model and masker, but this choice can always be overriden
by passing the name of a specific algorithm.
Since 'permutation' has been selected, you can directly use shap.explainers.Permutation and set max_evals to the value suggested in the error message above.
Given the very large number of features in your use case, this might take a really long time. I would suggest using a simpler model just to test the above solution.
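As a rough illustration of that requirement (this snippet is mine, not from the question; the toy regression model and the feature count are only illustrative), the Permutation explainer simply needs max_evals >= 2 * num_features + 1:
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

# Toy data with 8 features, so the Permutation explainer needs max_evals >= 2 * 8 + 1 = 17
X = np.random.rand(50, 8)
y = X @ np.random.rand(8)
model = LinearRegression().fit(X, y)

explainer = shap.explainers.Permutation(model.predict, X, max_evals=2 * X.shape[1] + 1)
shap_values = explainer(X[:5])
print(shap_values.values.shape)  # (5, 8): one attribution per feature for each explained row
In the question, every value of the flattened input image counts as a feature: 483 * 640 * 3 = 927,360 values, which is exactly why the error message asks for 2 * 927360 + 1 = 1854721 evaluations.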

ValueError: Could not interpret optimizer identifier: <tensorflow.python.keras.optimizers.SGD object at 0x0000013887021208>

I am trying to run this code and I get this error. Has anyone had the same error before?
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)

# Compile model
model.compile(optimizer=sgd, loss=OBJECTIVE_FUNCTION, metrics=LOSS_METRICS)

fit_history = model.fit_generator(
    train_generator,
    steps_per_epoch=STEPS_PER_EPOCH_TRAINING,
    epochs=NUM_EPOCHS,
    validation_data=validation_generator,
    validation_steps=STEPS_PER_EPOCH_VALIDATION,
    callbacks=[cb_checkpointer, cb_early_stopper]
)

model.load_weights("../working/best.hdf5")
Now I have this error:
File "", line 1, in runfile('C:/Users/ResNet50VF72.py', wdir='C:/Users/RESNET')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/RESNET/ResNet50VF72.py", line 110, in model.compile(optimizer = sgd, loss = OBJECTIVE_FUNCTION, metrics = LOSS_METRICS)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 96, in compile self.optimizer = optimizers.get(optimizer)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\optimizers.py", line 793, in get str(identifier))
ValueError: Could not interpret optimizer identifier : <tensorflow.python.keras.optimizers.SGD object at 0x0000013887021208>
I had the same issue with another optimizer:
ValueError: Could not interpret optimizer identifier: <tensorflow.python.keras.optimizers.Adam object at 0x7f3fc4575ef0>
This was because I created my model using keras and not tensorflow.keras. The solution was switching from:
from keras.models import Sequential
to
from tensorflow.keras.models import Sequential
Alternatively, one could use only keras and not tensorflow.keras (I was mixing old and new code); it seems it is the mixing of the two that causes the issue (which shouldn't be a surprise).
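For example, a minimal sketch that keeps the model and the optimizer in the same namespace (assuming TensorFlow 2.x; the toy model here is only for illustration):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Build a toy model and compile it with an optimizer from the same tf.keras namespace
model = Sequential([Dense(10, activation='relu', input_shape=(4,)), Dense(1)])
sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='mse')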
You should import like this:
from keras.optimizers import gradient_descent_v2
and set your hyperparameters like this:
opt = gradient_descent_v2.SGD(learning_rate=lr, decay=lr/epochs)
reference:
https://programmerah.com/keras-nightly-import-package-error-cannot-import-name-adam-from-keras-optimizers-29815/

Loading a pretrained model fails when multiple GPU was used for training

I have trained a network model and saved its weights and architecture via the checkpoint = ModelCheckpoint(filepath='weights.hdf5') callback. During training, I use multiple GPUs by calling the function below:
def make_parallel(model, gpu_count):
    def get_slice(data, idx, parts):
        shape = tf.shape(data)
        size = tf.concat([shape[:1] // parts, shape[1:]], axis=0)
        stride = tf.concat([shape[:1] // parts, shape[1:] * 0], axis=0)
        start = stride * idx
        return tf.slice(data, start, size)

    outputs_all = []
    for i in range(len(model.outputs)):
        outputs_all.append([])

    # Place a copy of the model on each GPU, each getting a slice of the batch
    for i in range(gpu_count):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('tower_%d' % i) as scope:
                inputs = []
                # Slice each input into a piece for processing on this GPU
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_n = Lambda(get_slice, output_shape=input_shape, arguments={'idx': i, 'parts': gpu_count})(x)
                    inputs.append(slice_n)

                outputs = model(inputs)
                if not isinstance(outputs, list):
                    outputs = [outputs]

                # Save all the outputs for merging back together later
                for l in range(len(outputs)):
                    outputs_all[l].append(outputs[l])

    # merge outputs on CPU
    with tf.device('/cpu:0'):
        merged = []
        for outputs in outputs_all:
            merged.append(merge(outputs, mode='concat', concat_axis=0))

        return Model(input=model.inputs, output=merged)
With the testing code:
from keras.models import Model, load_model
import numpy as np
import tensorflow as tf
model = load_model('cpm_log/deneme.hdf5')
x_test = np.random.randint(0, 255, (1, 368, 368, 3))
output = model.predict(x = x_test, batch_size=1)
print output[4].shape
I got the error below:
Traceback (most recent call last):
File "cpm_test.py", line 5, in <module>
model = load_model('cpm_log/Jun5_1000/deneme.hdf5')
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 240, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 301, in model_from_config
return layer_module.deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/__init__.py", line 46, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python2.7/dist-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2378, in from_config
process_layer(layer_data)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2373, in process_layer
layer(input_tensors[0], **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 578, in __call__
output = self.call(inputs, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 659, in call
return self.function(inputs, **arguments)
File "/home/muhammed/DEV_LIBS/developments/mocap/pose_estimation/training/cpm/multi_gpu.py", line 12, in get_slice
def get_slice(data, idx, parts):
NameError: global name 'tf' is not defined
By inspecting the error output, I conclude that the problem is with the parallelization code, but I can't resolve the issue.
You may need to pass custom_objects to enable loading of the model: the Lambda layer's get_slice function refers to the global name tf, which is not defined in the scope where the saved model is deserialized, so you have to supply it yourself.
from keras.models import load_model
import tensorflow as tf

model = load_model('model.h5', custom_objects={'tf': tf})
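Applied to the test script from the question, that would look roughly like this (a sketch that reuses the question's own path and prediction code):
from keras.models import load_model
import numpy as np
import tensorflow as tf

# Passing tf through custom_objects lets the Lambda layer's get_slice function resolve the name
model = load_model('cpm_log/deneme.hdf5', custom_objects={'tf': tf})
x_test = np.random.randint(0, 255, (1, 368, 368, 3))
output = model.predict(x=x_test, batch_size=1)
print(output[4].shape)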

XGBoost: xgb.importance feature map

I am getting the error below when trying to run the following code.
******Code******
importance = bst.get_fscore(fmap='xgb.fmap')
importance = sorted(importance.items(), key=operator.itemgetter(1))
******Error******
File "scripts/xgboost_bnp.py", line 225, in <module>
importance = bst.get_fscore(fmap='xgb.fmap')
File "/usr/lib/python2.7/site-packages/xgboost/core.py", line 754, in get_fscore
trees = self.get_dump(fmap)
File "/usr/lib/python2.7/site-packages/xgboost/core.py", line 740, in get_dump
ctypes.byref(sarr)))
File "/usr/lib/python2.7/site-packages/xgboost/core.py", line 92, in _check_call
raise XGBoostError(_LIB.XGBGetLastError())
xgboost.core.XGBoostError: can not open file "xgb.fmap"
The error is raised because you are calling get_fscore with the optional parameter fmap, which tells xgboost to fetch the feature importances using a feature map file called xgb.fmap, and that file does not exist in your file system.
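If you actually want a feature map, you have to create that file yourself first. A minimal sketch (create_feature_map is my own helper; bst is assumed to be the trained Booster from your code), where each line of the file is "<index><TAB><name><TAB><type>" and 'q' marks a quantitative feature:
def create_feature_map(feature_names, fmap_path='xgb.fmap'):
    # One line per feature: "<index>\t<name>\t<type>" ('q' = quantitative, 'i' = indicator)
    with open(fmap_path, 'w') as f:
        for i, name in enumerate(feature_names):
            f.write('{0}\t{1}\tq\n'.format(i, name))

create_feature_map(['feature_a', 'feature_b', 'feature_c'])  # use your real feature names here
importance = bst.get_fscore(fmap='xgb.fmap')
If you do not need a feature map at all, simply call get_fscore() without the fmap argument.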
Here is a function returning sorted feature names and their importances:
import xgboost as xgb
import pandas as pd

def get_xgb_feat_importances(clf):
    if isinstance(clf, xgb.XGBModel):
        # clf has been created by calling
        # xgb.XGBClassifier.fit() or xgb.XGBRegressor().fit()
        fscore = clf.booster().get_fscore()
    else:
        # clf has been created by calling xgb.train.
        # Thus, clf is an instance of xgb.Booster.
        fscore = clf.get_fscore()

    feat_importances = []
    for ft, score in fscore.iteritems():
        feat_importances.append({'Feature': ft, 'Importance': score})
    feat_importances = pd.DataFrame(feat_importances)
    feat_importances = feat_importances.sort_values(
        by='Importance', ascending=False).reset_index(drop=True)
    # Divide the importances by the sum of all importances
    # to get relative importances. By using relative importances
    # the sum of all importances will equal 1, i.e.,
    # np.sum(feat_importances['Importance']) == 1
    feat_importances['Importance'] /= feat_importances['Importance'].sum()
    # Print the most important features and their importances
    print feat_importances.head()
    return feat_importances

Error: pandoc document conversion failed with error 127

|................................................................ | 98%
|.................................................................| 100%
output file: test.knit.md
/usr/lib/rstudio-server/bin/pandoc/pandoc +RTS -K512m -RTS test.utf8.md --to html --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash --output test.html --smart --email-obfuscation none --self-contained --standalone --section-divs --table-of-contents --toc-depth 3 --template /usr/lib64/R/library/rmarkdown/rmd/h/default.html --variable 'theme:cerulean' --include-in-header /tmp/Rtmp4CjD4R/rmarkdown-str6f6421e357bc.html --mathjax --variable 'mathjax-url:https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' --highlight-style haddock
Error: pandoc document conversion failed with error 127
Occasionally, when I run the code, it completes 100% and creates an md-format file.
> traceback()
10: eval(expr, envir, enclos)
9: FUN(X[[i]], ...)
8: lapply(aesthetics, eval, envir = data, enclos = plot$plot_env)
7: f(..., self = self)
6: l$compute_aesthetics(d, plot)
5: f(l = layers[[i]], d = data[[i]])
4: by_layer(function(l, d) l$compute_aesthetics(d, plot))
3: ggplot_build(x)
2: print.ggplot(x)
1: function (x, ...)
UseMethod("print")(x)
There is no ggplot call inside the R Markdown, yet traceback() shows a ggplot error. Either way, the render fails with error 127. Any solution?
Googling the error message indicates that ‘status 127’ means that commands are not being found.