Translate F2PY compile steps into setup.py - distutils

I've inherited a Fortran 77 code that implements several subroutines, driven by a program block which requires a significant amount of user input via an interactive command prompt every time the program is run. Since I'd like to automate running the code, I moved all the subroutines into a module and wrote a wrapper through F2PY. Everything works fine after a 2-step compilation:
gfortran -c my_module.f90 -o my_module.o -ffixed-form
f2py -c my_module.o -m my_wrapper my_wrapper.f90
This ultimately creates four files: my_module.o, my_wrapper.o, my_module.mod, and my_wrapper.so. The my_wrapper.so is the module which I import into Python to access the legacy Fortran code.
My goal is to include this code for use in a larger package of scientific codes, which already has a setup.py using distutils to build a Cython module. Totally ignoring the Cython code for the moment, how am I supposed to translate the 2-step build into an extension in the setup.py? The closest I've been able to figure out looks like:
from numpy.distutils.core import setup, Extension
wrapper = Extension('my_wrapper', ['my_wrapper.f90', ])
setup(
    libraries = [('my_module', dict(sources=['my_module.f90'],
                                    extra_f90_compile_args=["-ffixed-form", ]))],
    ext_modules = [wrapper, ]
)
This doesn't work, though. My compiler throws many warnings on my_module.f90, but it still compiles (it throws no warnings if I use the compiler invocation above). When it tries to compile the wrapper, however, it fails to find my_module.mod, even though that file is successfully created.
Any thoughts? I have a feeling I'm missing something trivial, but the documentation just doesn't seem fleshed out enough to indicate what it might be.

It might be a little late, but your problem is that you are not linking in my_module when building my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90'], libraries=['my_module'])
setup(
    libraries = [('my_module', dict(sources=['my_module.f90'],
                                    extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules = [wrapper]
)
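With this layout, numpy.distutils builds my_module as a helper library first and then links it into the wrapper extension. As a quick sanity check, assuming both .f90 files sit next to setup.py, a sketch of the build and import would be:
python setup.py build_ext --inplace
python -c "import my_wrapper; print(my_wrapper.__doc__)"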
If your only use of my_module is for my_wrapper, you could simply add it to the sources of my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90', 'my_module.f90'],
                    extra_f90_compile_args=["-ffixed-form"])
setup(
    ext_modules = [wrapper]
)
Note that this will also export everything in my_module to Python, which you probably don't want.
I am dealing with such a two-layer library structure outside of Python, using cmake as the top-level build system. I have it set up so that make python calls distutils to build the Python wrappers. The setup.pys can safely assume that all external libraries are already built and installed. This strategy is advantageous if one wants to have general-purpose libraries that are installed system-wide, and then wrapped for different applications such as Python, Matlab, Octave, IDL, ..., which all have different ways to build extensions.

Distributing pybind11 extension linked to third party libraries

I'm working on a pybind11 extension written in C++, but I'm having a hard time understanding how it should be distributed.
The project links to a number of third party libraries (e.g. libpng, glew etc.).
The project builds fine with CMake and it generates a .so file. Now I am not sure what the right way of installing this extension is. The extension seems to work: if I copy the file into the Python lib directories it is picked up (I can import it, and it works correctly). However, this is clearly not the way to go, I think.
I also tried the setuptools route (from https://pybind11.readthedocs.io/en/stable/compiling.html) by creating a setup.py file like this:
import sys
# Available at setup time due to pyproject.toml
from pybind11 import get_cmake_dir
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup
from glob import glob
files = sorted(glob("*.cpp"))
__version__ = "0.0.1"
ext_modules = [
    Pybind11Extension("mylib",
        files,
        # Example: passing in the version to the compiled code
        define_macros = [('VERSION_INFO', __version__)],
        ),
]
setup(
    name="mylib",
    version=__version__,
    author="fab",
    author_email="fab#fab",
    url="https://github.com/pybind/python_example",
    description="mylib",
    long_description="",
    ext_modules=ext_modules,
    extras_require={"test": "pytest"},
    cmdclass={"build_ext": build_ext},
    zip_safe=False,
    python_requires=">=3.7",
)
and now I can build the extension by simply calling
pip3 install .
However, it looks like all the links are broken, because whenever I try importing the extension in Python I get linkage errors, as if setuptools does not correctly link the extension against the third-party libs. For instance, errors in linking with libpng such as:
>>> import mylib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
However, I have no clue how to add this link info to setuptools, and I don't even know whether that's possible (it would be the setuptools equivalent of CMake's target_link_libraries).
I am really at a loss after weeks of reading documentation, forum threads and failed attempts. If anyone is able to point me in the right way or to clear some of the fog it would be really appreciated!
Thanks!
Fab
/home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
This line pretty much says it clearly. Your local shared object file .so can't find the libpng.so against which it is linked.
You can confirm this by running:
ldd /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so
There is no equivalent of target_link_libraries() in setuptools, because that wouldn't make any sense: the library is already built and you've already linked it. This is your system more or less telling you that it can't find the libraries it needs, and those most likely need to be installed.
This is also one of the reasons why Linux distributions provide their own package managers and why you should use the developer packages provided by said distributions.
So how do you fix this? Well, your .so file needs to find the other .so files against which you linked; to understand how this works, I will refer you to this link.
My main guess, based on the fact that it works when you manually copy the file, is that during the build process you probably specify the rpath to a local directory. Hence what you most likely need to do is specify to your setuptools build that it needs to copy those files when installing.
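As a sketch of how that kind of link and runtime-search information can be passed through setuptools (the library names and paths below are placeholders, not taken from the build in the question):
from glob import glob
from pybind11.setup_helpers import Pybind11Extension
ext_modules = [
    Pybind11Extension(
        "mylib",
        sorted(glob("*.cpp")),
        libraries=["png", "GLEW"],                # link against libpng / libGLEW
        library_dirs=["/usr/local/lib"],          # where those .so files live at build time
        runtime_library_dirs=["/usr/local/lib"],  # rpath recorded so they are also found at import time
    ),
]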

How to debug unit tests while developing a package in Julia

Say I develop a package with a limited set of dependencies (for example, LinearAlgebra).
In the unit testing part, I might need additional dependencies (for instance, CSV to load a file). I can configure that in the Project.toml; all good.
Now, from there and in VS Code, how can I debug the unit tests? I tried running "runtests.jl" in the debugger; however, it unsurprisingly complains that the CSV package is unavailable.
I could add the CSV package (as a temporary solution), but I would prefer that the debugger run with the unit-testing configuration; how can I achieve that?
As requested, here is how it can be reproduced (it is not quite minimal; instead I used a commonly used package, as that gives confidence that the package itself is not the problem). We will use DataFrames and try to execute the debugger for its unit tests.
Make a local version of DataFrames for the purpose of developing a feature in it. I execute dev DataFrames in a new REPL.
Select the correct environment (in .julia/dev/DataFrames) through the VS-code user interface.
Execute the "proper" unit testing by executing test DataFrames at the pkg prompt. Everything should go smoothly.
Try to execute the tests directly (open the runtests.jl and use the "Run" button in vs-code). I see some errors of the type:
LoadError: ArgumentError: Package CategoricalArrays not found in current path:
- Run `import Pkg; Pkg.add("CategoricalArrays")` to install the CategoricalArrays package.
which is consistent with CategoricalArrays being present in the [extras] section of the Project.toml but not present in the [deps].
Finally, instead of the "Run" command, execute "Run and Debug". I encounter similar errors; here is the first one:
Test Summary: | Pass Total
merge | 19 19
PASSED: index.jl
FAILED: dataframe.jl
LoadError: ArgumentError: Package DataStructures not found in current path:
- Run `import Pkg; Pkg.add("DataStructures")` to install the DataStructures package.
So I can't debug the code after the part requiring the extras packages.
After all that I delete this package with the command free DataFrames at the pkg prompt.
I see the same behavior in my package.
I'm not certain I understand your question, but I think you might be looking for the TestEnv package. It allows you to activate a temporary environment containing the [extras] dependencies. The discourse announcement contains a good description of the use cases.
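A minimal sketch of that workflow, assuming TestEnv is installed in your base environment and the package's own environment is currently active:
using TestEnv
TestEnv.activate()            # temporary environment that also includes the [extras] test dependencies
include("test/runtests.jl")   # run (or debug) the tests from this environment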
Your runtests.jl file should contain all necessary imports to run the tests.
Hence you are expected to have in your runtests.jl file lines such as:
using YourPackageName
using CSV
# the lines with tests now go here.
This is standard Julia package layout. For an example, have a look at any mature Julia package such as DataFrames.jl (https://github.com/JuliaData/DataFrames.jl/blob/main/test/runtests.jl).

SwiftPM: How to setup Swift module.map referring to two connected C libraries

I'm trying to build a Swift Package Manager system package (a module.modulemap)
making available two system C libraries where one includes the other.
That is, one (say libcurl) is a base module and the other C library is including
that (like so: #include "libcurl.h"). On the regular C side this works, because
the makefiles pass in proper -I flags and all is good (and I could presumably
do the same in SPM, but I'd like to avoid extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
  header "/usr/include/curl.h"
  link "curl"
  export *
}
module CMyLib [system] {
  use CBase
  header "/usr/include/mylib.h"
  link "mylib"
  export *
}
I got importing CBase in a Swift package working fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
Which is kinda understandable because the compiler doesn't know where to look
(though I assumed that use CBase would help).
Is there a way to get this to work w/o having to add -Xcc -I flags to the
build process?
Update 1: To a degree this is covered in
Swift SR-145
and
SE-0063: SwiftPM System Module Search Paths.
The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken-and-egg problem if there is no .pc file. I tried embedding my own .pc file in the package, but the system package directory isn't added to the PKG_CONFIG_PATH (and hence won't be considered during the compilation of a dependent module). So the question stands: how to accomplish that in an environment where the libs are installed, but without a .pc file (just header and lib)?
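For reference, a minimal sketch of the kind of mylib.pc that the pkgConfig setting expects, assuming the headers live in /usr/include and the library in /usr/lib (all names, paths and version numbers here are placeholders):
prefix=/usr
includedir=${prefix}/include
libdir=${prefix}/lib
Name: mylib
Description: mylib C library built on top of libcurl
Version: 1.0.0
Requires: libcurl
Cflags: -I${includedir}
Libs: -L${libdir} -lmylib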

How do I get information about compiler (version) that is used by Cython and f2py in IPython?

Does anyone know if there is a way to print the compiler (and its version) that is used when I use the Fortran magic and the Cython magic in IPython?
For example, like the compiler that was used to build Python: platform.python_compiler()
There are probably better ways to do this, but here are two quick ones.
For Cython, the first thing that came to mind was to make a Cython file that passes through the Cython compiler but causes an error at the C level.
Here's a simple one.
cdef extern from "nosuchheader.h":
    void myfakefunction(int a, double b)
On my computer IPython shows an error from distutils saying that "gcc failed with exit status 1".
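A related quick check, assuming the distutils/setuptools machinery is left at its defaults (which is what the Cython magic normally relies on), is to ask Python which C compiler it was configured with:
import sysconfig
print(sysconfig.get_config_var("CC"))  # e.g. 'gcc' on many Linux builds; may be empty with MSVC on Windows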
I don't currently use the %%fortran magic, but you should be able to see what f2py is doing based on its output.
f2py usually shows which compiler it is using, both when it searches for a compiler, and when it actually calls the Fortran compiler.
To figure that out, I'd recommend compiling some snippet of Fortran code via f2py and looking at the output.
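For instance, a throwaway snippet like the following (file and module names are arbitrary), compiled with f2py -c -m demo demo.f90, will make f2py print which compilers it found and invoked:
subroutine double_it(x)
  real(8), intent(inout) :: x
  x = 2.0d0 * x
end subroutine double_it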
On my Windows machine it shows the output as f2py searches for a Fortran compiler, and prints the lines
'Found executable C:\\mingw64\\bin\\gfortran.exe',
'Found executable C:\\mingw64\\bin\\gfortran.exe',
This tells me it is using gfortran.
Further down in the output it also shows the commands used to build the Fortran source code.
The documentation for the fortran magic mentions how to get verbose output.
If you pass the flag -vvv to the fortran magic, it will print the output from f2py.
You could also try looking at the %fortran_config magic mentioned in the documentation.

Boost.Python __init__() should return None, not 'NoneType'

I have a whole bunch of working C++ code that I want to write Python bindings for. I'm trying to use Boost.Python since it seems to be the easiest way to get this working, but it isn't cooperating. Here's part of the code for the extension module I'm trying to build:
BOOST_PYTHON_MODULE(libpcap_ext) {
    using namespace boost::python;
    class_<PacketEngine>("PacketEngine")
        .def("getAvailableDevices", &PacketEngine_getAvailableDevices);
}
Bjam seems to be a pain and refuses to recognize my PYTHONPATH or allow me to link with libpcap, so I'm using CMake. Here's my CMakeLists file, which can import and build everything just fine (it outputs libpcap_ext.so as expected):
CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
IF(NOT CMAKE_BUILD_TYPE)
    SET(CMAKE_BUILD_TYPE "DEBUG")
    #SET(CMAKE_BUILD_TYPE "RELEASE")
    #SET(CMAKE_BUILD_TYPE "RELWITHDEBINFO")
    #SET(CMAKE_BUILD_TYPE "MINSIZEREL")
ENDIF()
FIND_PACKAGE(Boost 1.55.0)
find_package(PythonLibs REQUIRED)
IF(Boost_FOUND)
    INCLUDE_DIRECTORIES("${Boost_INCLUDE_DIRS}" "${PYTHON_INCLUDE_DIRS}")
    SET(Boost_USE_STATIC_LIBS OFF)
    SET(Boost_USE_MULTITHREADED ON)
    SET(Boost_USE_STATIC_RUNTIME OFF)
    FIND_PACKAGE(Boost 1.55.0 COMPONENTS python)
    ADD_LIBRARY(pcap_ext MODULE PacketWarrior/pcap_ext.cc PacketWarrior/PacketEngine.h PacketWarrior/PacketEngine.cc PacketWarrior/Packet.h PacketWarrior/Packet.cc)
    TARGET_LINK_LIBRARIES(pcap_ext pcap)
    TARGET_LINK_LIBRARIES(pcap_ext ${Boost_LIBRARIES} ${PYTHON_LIBRARIES})
ELSEIF(NOT Boost_FOUND)
    MESSAGE(FATAL_ERROR "Unable to find correct Boost version. Did you set BOOST_ROOT?")
ENDIF()
ADD_DEFINITIONS("-Wall")
And my pcap.py file that attempts to utilize the module:
import libpcap_ext
engine = libpcap_ext.PacketEngine()
print engine.getAvailableDevices()
But whenever I try to run the module, I get the following error:
Traceback (most recent call last):
  File "../pcap.py", line 2, in <module>
    engine = libpcap_ext.PacketEngine()
TypeError: __init__() should return None, not 'NoneType'
I'm assuming it's because Boost.Python is trying to use Python 3 and my system default is Python 2.7.3. I've tried changing my user-config.jam file (in my boost_1_55_0 directory) to point to Python 2.7 and tried building:
# Configure specific Python version.
# using python : 2.7 : /usr/bin/python2.7 : /usr/include/python2.7 : /usr/lib ;
Boost.Python's installation instructions [0] seem to fail for me when I try to build quickstart with bjam (lots of warnings), so I tried following the Boost Getting Started instructions [1] to build a Python header binary, which is, I think, what is causing this problem. Any recommendations as to how to fix this would be amazing; I've spent hours on this.
This error is probably due to linking against the wrong Python library. Make sure both your extension and the Boost.Python library are linked against the Python installation you are using to import the module.
On Linux you can check against which libraries you've linked with ldd. On OS X otool -L does the same thing. So, for example
otool -L libpcap_ext.so
otool -L /path/to/libboost_python-mt.dylib
should list the Python library they are linked against.
With CMake you can use the variable PYTHON_LIBRARY to change which Python library is used. As an example, on the command line you can set it with
cmake -DPYTHON_LIBRARY="/path/to/libpython2.7.dylib" source_dir
Lastly, on OS X a quick and dirty way (i.e. without recompiling) to change the dynamically linked libraries is install_name_tool -change.
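A sketch of that last option, with placeholder paths (check the currently recorded path first with otool -L):
install_name_tool -change /path/to/wrong/libpython3.4.dylib /usr/lib/libpython2.7.dylib libpcap_ext.so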