Missing Python file for models using pyCommunicator - anylogic

My experience with Python is limited, but I just started looking at the new Python models included in AnyLogic's examples. I am looking at the first one, Passing Data Types. The model runs correctly, with the set, modify, and get functions working as expected. My question is: is there a Python file somewhere that the communicator is working with? I only see the .alp file in the folder.
Thanks

I'm also beginning to use this, but as I understand it, the idea of the Python helper here (among other things) is that you can run Python commands from within AnyLogic, so you don't actually need a Python file. It does, however, use the Python installation on your computer to run the scripts; if you don't have Python installed, your model won't work.

Related

how to append a .py file in windows powershell?

I am a beginner at shell scripting and have been working on NLP using Python. I am aware of the IDEs available but wanted to explore a different approach.
For now I have been doing it manually, and I am unable to figure out particular terms used in the solutions provided.

Distribute shell scripts using setuptools and pyproject.toml

I'm trying to distribute a shell script along with a Python package. Ideally, the shell script is installed when I run pip install my_package. I read from this SO answer that my expected behavior is exactly what the scripts keyword of setuptools.setup provides. E.g., the script my_script will be installed with the following setup.py script:
setup(
    ...
    scripts=['my_script'],
    ...
)
However, I cannot use the above method, for two reasons:
1. The official doc does not mention this behavior, so I don't know if I can keep relying on it.
2. My whole project is built on pyproject.toml, without setup.py. Although pyproject.toml provides a [project.scripts] table, as explained in the setuptools official doc, those scripts can only be Python functions, not shell scripts.
For completeness: in my case, the shell script reads git status and sets environment variables, which are then read from within my Python project. The shell script and my Python project are so tightly coupled that I would rather not split them into two projects.
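For illustration, the Python side of that handshake might look like this minimal sketch (the variable name GIT_STATE is hypothetical, not from the question):

import os

# Read the value exported by the shell script; fall back if it was never set.
git_state = os.environ.get("GIT_STATE", "unknown")  # hypothetical variable name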
I have also tried using a Python function to execute the shell script, e.g.:
[project.scripts]
my_script = 'my_project:my_func'

import subprocess

def my_func():
    subprocess.run(...)
The problem with this solution is that every time I run my_script, my_project is loaded and the loading process is really slow.
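One way to mitigate the slow start-up, sketched here under assumptions (the module name my_project_cli and the script path are hypothetical), is to point [project.scripts] at a tiny module that imports nothing heavy at module level:

# my_project_cli.py -- hypothetical lightweight entry-point module.
# Only the standard library is imported here, so start-up stays fast;
# heavy imports belong inside the functions that actually need them.
import subprocess
import sys

def my_func():
    # Run the bundled shell script; the path is an assumption for this sketch.
    result = subprocess.run(["sh", "scripts/my_script.sh"])
    sys.exit(result.returncode)

The entry then becomes my_script = 'my_project_cli:my_func', and running my_script no longer pays for importing the whole package.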
Maybe a link in the comments leads to this information already. Anyway, I think it is worth posting that scripts = [...] in setup.py can be written in pyproject.toml as:
[tool.setuptools]
script-files = ["scripts/myscript1", "scripts/myscript2"]
However, this feature is deprecated. I hope the authors of the packaging tools will recognize the problem with shell scripts and deal with it.
Link: setuptools docs
I'm not exactly sure it will work for your case, but I solved this by creating a "shim" setup.py file (it has the added benefit of letting you install your project in editable mode).
It usually just calls setup(), but it is possible to pass the scripts argument:
"""Shim setup file to allow for editable install."""
from setuptools import setup
if __name__ == "__main__":
setup(scripts=["myscript"])
Everything else was loaded from pyproject.toml.

How to deploy application which uses pybind11?

I know pybind11 provides a way to call Python from C++. My question is: how can I distribute the application? For example, does the user still need to install Python and Python packages on their machine?
I wish that, by using pybind11, I could just put the Python scripts my app uses under its folder and call them from C++, so that the user doesn't need to install Python at all. Can pybind11 achieve this goal? Or can the Python/C API or Boost.Python do that?
No, you'll have to install Python. All of those packages, and ones like them, are language bindings between C++ and Python. Say you'd like to use a Python script in your C++ project, so you make some bindings for it using pybind11. When you run your C++ code, it will use parts of the Python script, and those parts will be run in Python. What pybind11 does is translate the input to and the output from the Python script; it does not reimplement the script.
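To make that division of labor concrete, here is a hypothetical Python script such an application might embed; the function body executes in the Python interpreter, and pybind11 only marshals the string in and the list out:

# helper.py -- hypothetical script called from C++ through an embedded interpreter.
def tokenize(text):
    # This logic runs in Python, not C++, which is why a Python runtime
    # must be present on the user's machine.
    return text.lower().split()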

python script for validating files in eclipse

I have a self-defined, text-editable file format (a "datafile", which also uses certain Python types such as dicts, tuples, and lists) for providing argument data to my Python scripts. These arguments are later used in my Main Python script.
Currently, at the start of the Main program, I collect all such datafiles (using os.walk) and parse them every time, which takes a lot of time.
This is my issue!
Is there a mechanism in Eclipse to run a Python script (an independent one, like a parser) with the saved "datafile" as its argument, to check for syntax errors immediately after I save the file? That way I would not have to check for syntax errors while running the Main program (a sketch of such a validator appears after this question).
Is this possible?
I am using the Eclipse IDE with PyDev for my development work.
Regards,
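A standalone validator along these lines could do the check outside the Main program. This is a minimal sketch that assumes each datafile holds a single Python literal (dicts, tuples, lists), as the question suggests; in Eclipse it could plausibly be wired up as an External Tool that passes the saved file's path as the argument:

# validate_datafile.py -- minimal validator sketch; the script name and the
# single-literal assumption about the datafile format are assumptions.
import ast
import sys

def validate(path):
    with open(path) as f:
        source = f.read()
    try:
        # ast.literal_eval parses Python literals (dicts, tuples, lists,
        # strings, numbers) without executing arbitrary code.
        ast.literal_eval(source)
    except (SyntaxError, ValueError) as exc:
        print("%s: invalid datafile: %s" % (path, exc))
        return 1
    print("%s: OK" % path)
    return 0

if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))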

How to make sphinx look for modules in virtualenv while building html?

I want to build the HTML docs using a virtualenv instead of the native environment on my machine.
I've activated the virtualenv, but when I run make html I get errors saying the module can't be imported. I know the errors are due to the module being unavailable in my native environment.
How can I specify which environment should be used when building the docs (e.g. the virtualenv)?
The problem is correctly spotted by Mathijs in his answer below:
$ which sphinx-build
/usr/local/bin/sphinx-build
I solved this issue by installing Sphinx itself in the virtual environment.
With the environment activated:
$ source /home/migonzalvar/envs/myenvironment/bin/activate
$ pip install sphinx
$ which sphinx-build
/home/migonzalvar/envs/myenvironment/bin/sphinx-build
It seems neat enough.
The problem here is that make html invokes the sphinx-build command as a normal shell command, and the sphinx-build script explicitly specifies which Python interpreter to use in its first line (i.e. #!/usr/bin/python). If Python is invoked in this way, it will not use your virtual environment.
A quick and dirty way around this is by explicitly calling the sphinx-build Python script from an interpreter. In the Makefile, this can be achieved by changing SPHINXBUILD to the following:
SPHINXBUILD = python <absolute_path_to_sphinx-build-file>/sphinx-build
If you do not want to modify your Makefile you can also pass this parameter from the command line, as follows:
make html SPHINXBUILD='python <path_to_sphinx>/sphinx-build'
Now if you execute make html from within your virtualenv environment, it should use the Python interpreter from within your environment, and you should see Sphinx finding all the goodies it requires.
I am well aware that this is not a neat solution, as a Makefile like this should not assume any specific location for the sphinx-build file, so any suggestions for a more suitable solution are warmly welcomed.
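One variant that avoids hard-coding any path at all, assuming a Sphinx version recent enough to be runnable as a module (python -m sphinx is equivalent to sphinx-build in modern releases), is:

SPHINXBUILD = python -m sphinx

Since this launches whatever python is first on the PATH, it automatically picks up the virtualenv's interpreter whenever the environment is active.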
I had the same problem, but I couldn't use the accepted solution because I didn't use the Makefile. I was calling sphinx-build from within a custom python build file. What I really wanted to do was to call sphinx-build with the exact same environment that I was calling my python build script with. Fiddling with paths was too complicated and error prone, so I ended up with what seems to me like an elegant solution, which is to "manually" load the console script entry point and call it:
from pkg_resources import load_entry_point

# Look up the 'sphinx-build' console-script entry point from the installed
# Sphinx distribution and call it in-process, so it runs under the same
# interpreter (and environment) as this build script.
cmd = load_entry_point('Sphinx', 'console_scripts', 'sphinx-build')
cmd(['sphinx-build', basepath, destpath])
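On newer Python versions, where pkg_resources is deprecated, the same trick can be written with importlib.metadata. A minimal sketch, assuming Python 3.10+ (for the selectable entry_points interface) and a modern Sphinx whose entry point accepts argv without the program name:

from importlib.metadata import entry_points

# Select the 'sphinx-build' console-script entry point from the installed
# Sphinx distribution and load the underlying function.
(ep,) = entry_points(group="console_scripts", name="sphinx-build")
cmd = ep.load()
cmd([basepath, destpath])  # argv without the program name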