I'm trying to simulate my Dymola FMU multiple times with PyFMI using the do_step() function; after each run, the simulation is supposed to start again from the beginning.
The first simulation runs as expected, but afterwards the initialization fails / gets stuck and stops:
"FMIL: module = b'Model', log level = 4: b'[][FMU status:OK] Non-linear solver will attempt to handle this problem.\n'
FMIL: module = b'Model', log level = 4: b'[][FMU status:OK] Residual was NaN or Inf.\n'"
"Pseudo Code":
model = load_fmu(r'path', kind="CS", log_level=4)
model.initialize()
model.setup_experiment(start_time=start_time)

for run in range(2):
    for step in steps:
        status = model.do_step(current_t, step_size, new_step=True)
    model.setup_experiment(start_time=start_time)
    model.initialize()  # <- fails here on the second run
The FMU was created with Dymola 2022.
Has anyone had a similar problem? I haven't had issues with other FMUs and this Python library.
Edit:
I have tried FMPy, where it runs without problems, but it is much slower.
You need to reset the FMU before you can initialize it again. Use:
model.reset()
to reset the FMU, i.e.:
for run in range(2):
    for step in steps:
        status = model.do_step(current_t, step_size, new_step=True)
    model.reset()
    model.setup_experiment(start_time=start_time)
    model.initialize()
You should reset the FMU with fmi2Reset before initializing it again; see section 2.1.6 of the FMI 2.0 specification: https://github.com/modelica/fmi-standard/releases/download/v2.0.3/FMI-Specification-2.0.3.pdf
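For reference, here is a minimal sketch of the corrected loop with PyFMI (the FMU path 'model.fmu', the times and the step size are placeholders; it assumes an FMI 2.0 co-simulation FMU):

from pyfmi import load_fmu

# Placeholders: adjust the FMU path, times and step size to your model.
start_time, stop_time, step_size = 0.0, 1.0, 0.01
model = load_fmu('model.fmu', kind='CS', log_level=4)

for run in range(2):
    if run > 0:
        model.reset()  # return the FMU to its freshly instantiated state
    model.setup_experiment(start_time=start_time)
    model.initialize()
    t = start_time
    while t < stop_time:
        status = model.do_step(current_t=t, step_size=step_size, new_step=True)
        t += step_size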
I am trying to run an OpenModelica script using a DOS .bat file, but I am facing some issues.
The batch file runmodelica.bat has
%OPENMODELICAHOME%\bin\omc testscript.mos
The file testscript.mos is
loadModel(Modelica)
getErrorString()
loadFile("HelloWorld.mo")
simulate(HelloWorld)
getErrorString()
If I run the commands in the testscript.mos file from an OM Shell after changing to that directory, everything works fine. But if I run the batch file from the DOS prompt, I get the following error:
Error processing file: testscript.mos
[C:/Users/blahblah/testscript.mos:2:1-2:1:writable] Error: Missing token: ASSIGN
# Error encountered! Exiting...
# Please check the error message and the flags.
Execution failed!
The HelloWorld.mo file comes with the standard installation and I haven't modified it:
class HelloWorld
  Real x(start = 1);
  parameter Real a = 1;
equation
  der(x) = - a * x;
end HelloWorld;
I am new to OpenModelica and searched online but couldn't find a solution. Any help is appreciated.
All the commands run fine in the OM Shell but not when invoked from the .bat file.
The script code needs to be valid Modelica, so you need a ; after each command:
loadModel(Modelica);
getErrorString();
loadFile("HelloWorld.mo");
simulate(HelloWorld);
getErrorString();
I am trying to run some code with PyBullet. I am on Windows 10, have the latest VS Code, and I am using the WSL Remote extension in VS Code with Ubuntu 18.04 LTS. I have an RTX 2070 graphics card. I just want to see this work; I've been trying to fix it for the last 3 hours.
First, here is the code I am trying to run in WSL:
import numpy as np
import pybullet as pb

physicsClient = pb.connect(pb.GUI)

# load plane
import pybullet_data
pb.setAdditionalSearchPath(pybullet_data.getDataPath())
planeId = pb.loadURDF('plane.urdf')

# load visual shape
visualShapeId = pb.createVisualShape(
    shapeType=pb.GEOM_MESH,
    fileName='random_urdfs/000/000.obj',
    rgbaColor=None,
    meshScale=[0.1, 0.1, 0.1])

collisionShapeId = pb.createCollisionShape(
    shapeType=pb.GEOM_MESH,
    fileName='random_urdfs/000/000_coll.obj',
    meshScale=[0.1, 0.1, 0.1])

multiBodyId = pb.createMultiBody(
    baseMass=1.0,
    baseCollisionShapeIndex=collisionShapeId,
    baseVisualShapeIndex=visualShapeId,
    basePosition=[0, 0, 1],
    baseOrientation=pb.getQuaternionFromEuler([0, 0, 0]))
I get no errors, but the X server window pops up (black) and closes immediately. I read that you need to disable your GPU with WSL, but I am scared of messing up my PC. I would only want to disable it when I need to see graphics / use the X server, not for all WSL applications.
Here is what shows up in my bash terminal:
user#DESKTOP-######:~/program$ python3 openAI.py
pybullet build time: Sep 22 2020 00:54:31
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Failed to create GL 3.3 context ... using old-style GLX context
Indirect GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce RTX 2070 SUPER/PCIe/SSE2
GL_VERSION=1.4 (4.6.0 NVIDIA 451.67)
GL_SHADING_LANGUAGE_VERSION=(null)
pthread_getconcurrency()=0
Version = 1.4 (4.6.0 NVIDIA 451.67)
Vendor = NVIDIA Corporation
Renderer = GeForce RTX 2070 SUPER/PCIe/SSE2
Segmentation fault (core dumped)
user#DESKTOP-######:~/program$
@Emilio, I have got this working without any changes to the GPU, using the following process:
1. I used the VcXsrv application, set up in the same way as in this tutorial: https://jack-kawell.com/2020/06/12/ros-wsl2/ where, crucially, Native openGL is unchecked.
2. Export your IP address as in the tutorial; however, instead of 'export DISPLAY={your_ip_address}:0.0', go to the VcXsrv window (which should be blank at this point) and replace :0.0 with whatever display number is shown. So for 'Display DESKTOP-1234AB:1.0' you would enter 'export DISPLAY={your_ip_address}:1.0'.
3. In the Linux terminal enter: export LIBGL_ALWAYS_INDIRECT=0
You can check that this has taken effect by entering: glxinfo
which should print:
direct rendering: yes
When you run your python program it should open up in the VcXsrv window. For me there was no cursor visible but I could still interact with the object as if I did have a cursor.
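Once the X server is reachable, a quick way to confirm the setup from Python (a small sketch of my own, not from the tutorial; it assumes pybullet and pybullet_data are installed in WSL) is to check DISPLAY before asking for the GUI and to keep stepping the simulation so the window stays open:

import os
import time
import pybullet as pb
import pybullet_data

# Fall back to headless mode if no X server is advertised via DISPLAY.
mode = pb.GUI if os.environ.get('DISPLAY') else pb.DIRECT
client = pb.connect(mode)

pb.setAdditionalSearchPath(pybullet_data.getDataPath())
pb.loadURDF('plane.urdf')
pb.setGravity(0, 0, -9.81)

# Step for about five seconds so the VcXsrv window does not close immediately.
for _ in range(240 * 5):
    pb.stepSimulation()
    time.sleep(1.0 / 240.0)

pb.disconnect()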
I'm a beginner at OpenCV and am trying to run an open-source program.
http://asrl.utias.utoronto.ca/code/gpusurf/index.html
I currently have the Computer Vision Toolbox OpenCV Interface 20.1.0 installed and Computer Vision Toolbox 9.2.
I cannot run this simple open-source feature matching algorithm without encountering errors.
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
% read images
img1 = cv2.imread('[INSERT PATH #1]');
img2 = cv2.imread('[INSERT PATH #2]');
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY);
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY);
%sift
sift = cv2.xfeatures2d.SIFT_create();
keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None);
keypoints_2, descriptors_2 = sift.detectAndCompute(img2,None);
len(keypoints_1), len(keypoints_2)
The following message is returned:
Error: File: Keypoints.m Line: 1 Column: 8
The import statement 'import cv2' cannot be found or cannot be imported. Imported names must end with '.*' or be
fully qualified.
However, when I remove Line 1, I instead get the following error.
Error: File: Keypoints.m Line: 2 Column: 8
The import statement 'import matplotlib.pyplot' cannot be found or cannot be imported. Imported names must end
with '.*' or be fully qualified.
Finally, following the error message only results in a sequence of further errors from the cv2 library. Any ideas?
That's because the code you've used isn't MATLAB code; it's Python code.
As per the website you've linked:
From within Matlab
The parallel implementation coded in Matlab can be run by using the surf_find_keypoints() function. The output keypoints can be sorted by strength using surf_best_n_keypoints(), and plotted using surf_plot_keypoints().
Check that you've downloaded the correct files and try again.
Furthermore, the MATLAB OpenCV Interface is designed to integrate C++ OpenCV code, not Python. Documentation here.
Yes, it is correct that this is Python code. I would recommend checking your dependencies/libraries. The PyCharm IDE is what I personally use, since it takes care of all the libraries easily.
If you do end up trying out PyCharm, click on the red icon when hovering over cv2. It'll then give you a prompt to download the library.
Edit:
With Python, some setup is needed. Using pip:
Install opencv-python
pip install opencv-python
Install opencv-contrib-python
pip install opencv-contrib-python
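After installing, a quick sanity check (a small sketch; the version threshold reflects my understanding that SIFT moved into the main cv2 module around OpenCV 4.4) is:

import cv2

print(cv2.__version__)
# Newer builds expose SIFT in the main module; older ones need the contrib package.
if hasattr(cv2, 'SIFT_create'):
    sift = cv2.SIFT_create()
elif hasattr(cv2, 'xfeatures2d') and hasattr(cv2.xfeatures2d, 'SIFT_create'):
    sift = cv2.xfeatures2d.SIFT_create()
else:
    raise RuntimeError('No SIFT implementation available in this OpenCV build')
print(type(sift))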
Unfortunately, there is an issue with the SIFT feature, since by default it is excluded from newer free versions of OpenCV.
sift = cv2.xfeatures2d.SIFT_create() not working even though have contrib installed
import cv2
Image_1 = cv2.imread("Image_1.png", cv2.IMREAD_COLOR)
Image_2 = cv2.imread("Image_2.jpg", cv2.IMREAD_COLOR)
Image_1 = cv2.cvtColor(Image_1, cv2.COLOR_BGR2GRAY)
Image_2 = cv2.cvtColor(Image_2, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
keypoints_1, descriptors_1 = sift.detectAndCompute(Image_1,None)
keypoints_2, descriptors_2 = sift.detectAndCompute(Image_2,None)
len(keypoints_1), len(keypoints_2)
The error I received:
"/Users/michael/Documents/PYTHON/Test Folder/venv/bin/python" "/Users/michael/Documents/PYTHON/Test Folder/Testing.py"
Traceback (most recent call last):
File "/Users/michael/Documents/PYTHON/Test Folder/Testing.py", line 9, in <module>
sift = cv2.SIFT_create()
AttributeError: module 'cv2.cv2' has no attribute 'SIFT_create'
Process finished with exit code 1
OS: Ubuntu 14.04 LTS
Language: Python (Anaconda 2.7) with Keras and Theano
GPU: GTX 980 Ti
CUDA: CUDA 7.5
I want to run Keras Python code in an IPython Notebook using my GPU (GTX 980 Ti), but I can't get it to work.
I want to test the code below. When I run it from the Ubuntu terminal, I use the commands below, and it uses the GPU fine without any problem.
First I set the path like this:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Then I run the code like this:
THEANO_FLAGS='floatX=float32,device=gpu0,nvcc.fastmath=True' python myscript.py
And it runs well.
But when I run the code in PyCharm (a Python IDE) or in an IPython Notebook, it doesn't use the GPU; it only uses the CPU.
The myscript.py code is below.
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
To solve it, I forced the code to use the GPU as below
(by inserting two more lines into myscript.py):
import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")
Then it generates the error below:
ERROR (theano.sandbox.cuda): nvcc compiler not found on $PATH. Check your nvcc installation and try again.
How can I fix this? I have spent two days on it.
I have also already tried the approach of using a '.theanorc' file in my home directory.
I'm using Theano in an IPython Notebook, making use of my system's GPU. This configuration works fine on my system (a MacBook Pro with a GTX 750M).
My ~/.theanorc file:
[global]
cnmem = True
floatX = float32
device = gpu0
Various environment variables (I use a virtual environment, macvnev):
echo $LD_LIBRARY_PATH
/opt/local/lib:
echo $PATH
/Developer/NVIDIA/CUDA-7.5/bin:/opt/local/bin:/opt/local/sbin:/Developer/NVIDIA/CUDA-7.0/bin:/Users/Ramana/projects/macvnev/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
echo $DYLD_LIBRARY_PATH
/Developer/NVIDIA/CUDA-7.5/lib:/Developer/NVIDIA/CUDA-7.0/lib:
How I run the IPython Notebook (for me, the device is gpu0):
$THEANO_FLAGS=mode=FAST_RUN,device=gpu0,floatX=float32 ipython notebook
Output of $nvcc -V :
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Thu_Sep_24_00:26:39_CDT_2015
Cuda compilation tools, release 7.5, V7.5.19
From your post, it looks like you've probably set the $PATH variable incorrectly.
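To see which device the notebook kernel actually picked up, a quick check from inside the notebook (a small sketch, assuming Theano imports in that kernel) is:

import os
import theano

# If this prints 'cpu', the kernel did not pick up your THEANO_FLAGS / .theanorc
# settings; also check that the CUDA bin directory is on this process's PATH.
print(theano.config.device, theano.config.floatX)
print(os.environ.get('PATH'))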
Hi everyone!
I'm trying to parallelize an algorithm that uses MEX files from mexopencv (KNearest.m, KNearest_.mexw32).
The program is based on vlfeat (vlsift.mex32) + mexopencv (KNearest.m and KNearest_.mexw32). I classify descriptors obtained from images.
All the code is located on a fileshare:
\\LAB-07\untitled\DISTRIB\ (this is the program code)
\\LAB-07\untitled\+cv (mexopencv)
When I run the program with matlabpool closed, everything works well.
Then I open matlabpool (2 computers with 2 cores each, so ultimately 4 workers, but for now I am testing with only 2 workers on one computer) and run the program.
PathDependencies from the fileshare -> \\LAB-07\untitled\DISTRIB\, \\LAB-07\untitled\+cv
Before the parfor loop, I train the classifier on the local machine:
classifiers = cv.KNearest
classifiers.train(Descriptors',Labels','MaxK',1)
Then I run the parfor loop:
descr = vl_sift(img);
PredictClasses = classifiers.predict(descr');
Error:
Error in ==> KNearest>KNearest.find_nearest at 173
Invalid MEX-file '\\LAB-07\untitled\+cv\private\KNearest_.mexw32':
The specified module could not be found.
That is, KNearest.m is found, but KNearest_.mexw32 is not. Because KNearest_.mexw32 is located in a private folder, I changed the code of KNearest.m (everywhere it calls KNearest_() I changed it to cv.KNearest_(), e.g. this.id = cv.KNearest_()) and placed KNearest.m in the same folder as KNearest_.mexw32. As a result, I get the same error.
Immediately after matlabpool open, searching for the files on the workers gives:
pctRunOnAll which ('KNearest.m')
'KNearest.m' not found.
'KNearest.m' not found.
'KNearest.m' not found.
pctRunOnAll which ('KNearest_.mexw32')
'KNearest_.mexw32' not found.
'KNearest_.mexw32' not found.
'KNearest_.mexw32' not found.
after cd \\LAB-07\untitled\+cv
pctRunOnAll which ('KNearest.m')
\\LAB-07\untitled\+cv\KNearest.m
\\LAB-07\untitled\+cv\KNearest.m % cv.KNearest constructor
\\LAB-07\untitled\+cv\KNearest.m
>> pctRunOnAll which ('KNearest_.mexw32')
\\LAB-07\untitled\+cv\KNearest_.mexw32
\\LAB-07\untitled\+cv\KNearest_.mexw32
\\LAB-07\untitled\+cv\KNearest_.mexw32
I also tried FileDependencies, but got the same result.
I do not know whether this is related or not, but I display the classifiers object during the execution of the program.
After training and before the parfor:
classifiers =
cv.KNearest handle
Package: cv
Properties:
id: 5
MaxK: 1
VarCount: 128
SampleCount: 9162
IsRegression: 0
Methods, Events, Superclasses
Within the parfor loop, before classifiers.predict:
classifiers =
cv.KNearest handle
Package: cv
Properties:
id: 5
I also tested the file cvtColor.mexw32. I left only 2 files in the folder, cvtColor.mexw32 and vl_sift:
parfor i=1:2
    im1 = imread('Copy_of_start40.png');
    im_vl = im2single(rgb2gray(im1));
    desc = vl_sift(im_vl);
    im1 = cvtColor(im1,'RGB2GRAY');
end
The same error; vl_sift works, but cvtColor does not...
If the worker machines can see the code in your shared filesystem, you should not need FileDependencies or PathDependencies at all. It looks like you're using Windows. It seems to me that the most likely problem is file permissions. MDCS workers running under a jobmanager on Windows do not, by default, run using your own account (they run using the "LocalSystem" account, I think), and so may well simply not have access to files on a shared filesystem. You could try making sure your code is world-readable.
Otherwise, you can add the files to the pool by using something like
matlabpool('addfiledependencies', {'\\LAB-07\untitled\+cv'})
Note that MATLAB interprets directories with a + in the name as defining "packages"; I'm not sure whether this is intentional in your case.
EDIT
Ah, re-reading your original post and your comments below - I suspect the problem is that the workers cannot see the libraries on which your MEX file depends. (That's what the "Invalid MEX-file" message is indicating.) You could use http://www.dependencywalker.com/ to work out what the dependencies of your MEX file are, and make sure they're available on the workers (I think they need to be on %PATH%, or in the current directory).
Thanks, Edric. There was a problem with the PATH for parfor. Using http://www.dependencywalker.com/ I found the missing files and put them in the +cv folder. Only this method worked in parfor.
But predict in parfor gives an error:
PredictClasses = classifiers.predict(descr');
??? Error using ==> parallel_function at 598
Error in ==> KNearest>KNearest.find_nearest at 173
Unexpected Standard exception from MEX file.
What() is:..\..\..\src\opencv\modules\ml\src\knearest.cpp:365: error: (-2) The search
tree must be constructed first using train method
I solved this problem by calling train each time within the parfor loop:
classifiers = cv.KNearest
classifiers.train(Descriptors',Labels','MaxK',1)
But it's an ugly solution :)