SwiftPM: How to set up a Swift module.map referring to two connected C libraries

I'm trying to build a Swift Package Manager system package (a module.modulemap)
that makes two system C libraries available, where one includes the other.
That is, one (say libcurl) is a base module, and the other C library includes
it (like so: #include "libcurl.h"). On the regular C side this works, because
the makefiles pass in the proper -I flags and all is good (and I could presumably
do the same in SPM, but I'd like to avoid passing extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
    header "/usr/include/curl.h"
    link "curl"
    export *
}

module CMyLib [system] {
    use CBase
    header "/usr/include/mylib.h"
    link "mylib"
    export *
}
Importing CBase in a Swift package works fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
This is understandable, because the compiler doesn't know where to look
(though I assumed that use CBase would help).
Is there a way to get this to work without having to add -Xcc -I flags to the
build process?
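For reference, the flag-based workaround I'd like to avoid would look something like this (the include path is illustrative):
$ swift build -Xcc -I/usr/local/include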
Update 1: To a degree this is covered in
Swift SR-145
and
SE-0063: SwiftPM System Module Search Paths.
The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken-and-egg problem if there is no .pc file. I tried embedding my own .pc file in the package, but the system package directory isn't added to PKG_CONFIG_PATH (and hence the file won't be considered during the compilation of a dependent module). So the question stands: how do I accomplish this in an environment where the libraries are installed, but without a .pc file (just headers and libs)?
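For illustration, here is what the pkg-config route would look like if a .pc file did exist. The mylib.pc below is a hand-written sketch (all names, paths, and the version are assumptions about this setup), and it would have to live in a directory that pkg-config searches, e.g. one listed in PKG_CONFIG_PATH:

prefix=/usr
includedir=${prefix}/include
libdir=${prefix}/lib

Name: mylib
Description: Example C library whose headers include libcurl
Version: 1.0.0
Requires: libcurl
Cflags: -I${includedir}
Libs: -L${libdir} -lmylib

The system package's manifest would then point at it via the pkgConfig parameter from SE-0063:

import PackageDescription

let package = Package(
    name: "CMyLib",
    pkgConfig: "mylib"
)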

Related

Distributing pybind11 extension linked to third party libraries

I'm working on a pybind11 extension written in C++, but I'm having a hard time understanding how it should be distributed.
The project links to a number of third-party libraries (e.g. libpng, glew etc.).
The project builds fine with CMake and generates a .so file. Now I am not sure what the right way of installing this extension is. The extension seems to work: if I copy the file into the Python lib directories it is picked up (I can import it, and it works correctly). However, this is clearly not the way to go, I think.
I also tried the setuptools route (from https://pybind11.readthedocs.io/en/stable/compiling.html) by creating a setup.py file like this:
import sys

# Available at setup time due to pyproject.toml
from pybind11 import get_cmake_dir
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup
from glob import glob

files = sorted(glob("*.cpp"))

__version__ = "0.0.1"

ext_modules = [
    Pybind11Extension("mylib",
        files,
        # Example: passing in the version to the compiled code
        define_macros=[('VERSION_INFO', __version__)],
    ),
]

setup(
    name="mylib",
    version=__version__,
    author="fab",
    author_email="fab#fab",
    url="https://github.com/pybind/python_example",
    description="mylib",
    long_description="",
    ext_modules=ext_modules,
    extras_require={"test": "pytest"},
    cmdclass={"build_ext": build_ext},
    zip_safe=False,
    python_requires=">=3.7",
)
and now I can build the extension by simply calling
pip3 install .
However, it looks like all the links are broken, because whenever I try importing the extension in Python I get linkage errors, as if setuptools does not correctly link the extension against the third-party libs. For instance, errors in linking with libpng, as in:
>>> import mylib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
However, I have no clue how to add this link info to setuptools, and I don't even know if that's possible (it should be the setuptools equivalent of CMake's target_link_libraries).
I am really at a loss after weeks of reading documentation, forum threads and failed attempts. If anyone is able to point me in the right direction or clear some of the fog, it would be really appreciated!
Thanks!
Fab
/home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
This line pretty much says it clearly: your local shared object file (.so) can't find the libpng.so against which it was linked.
You can confirm this by running:
ldd /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so
There is no equivalent of target_link_libraries() in setuptools, because that wouldn't make any sense: the library is already built, and you've already linked it. This is your system more or less telling you that it can't find the libraries it needs, and those most likely need to be installed.
This is also one of the reasons why Linux distributions provide their own package managers, and why you should use the developer packages provided by said distributions.
So how do you fix this? Well, your .so file needs to find the other .so files against which you linked. To understand how this works, I will refer you to this link.
My main guess is based on the fact that it works when you manually copy the files: during the build process you probably specify the rpath to a local directory. Hence, what you most likely need to do is tell setuptools to copy those files when installing, or to embed a suitable rpath.
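As a sketch of that last idea: Pybind11Extension subclasses setuptools' Extension, which accepts libraries, library_dirs, and runtime_library_dirs keyword arguments; the last one embeds an rpath into the built extension so the loader can find the shared libraries at run time. The /opt/thirdparty/lib path below is a hypothetical location for the third-party .so files:

from glob import glob
from pybind11.setup_helpers import Pybind11Extension

ext_modules = [
    Pybind11Extension(
        "mylib",
        sorted(glob("*.cpp")),
        # passed to the linker as -lpng -lGLEW
        libraries=["png", "GLEW"],
        # -L search path used at link time (hypothetical location)
        library_dirs=["/opt/thirdparty/lib"],
        # embedded as an rpath, used by the loader at run time
        runtime_library_dirs=["/opt/thirdparty/lib"],
    ),
]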

How do I get CMake to dynamically link a merged static library with system libraries?

I have to merge one of my app's libs with the NVIDIA CUDA static lib using this horrific awful CMake code:
GET_TARGET_PROPERTY(OUTPUT_LIB ${LIBNAME} LOCATION)
add_custom_command(TARGET ${LIBNAME}
    POST_BUILD
    COMMAND mv ${OUTPUT_LIB} ${OUTPUT_LIB}.old
    COMMAND echo "create ${OUTPUT_LIB}" > combineLibs.mri
    COMMAND echo "addlib ${OUTPUT_LIB}.old" >> combineLibs.mri
    COMMAND echo "addlib ${CUDA_LOCATION}" >> combineLibs.mri
    COMMAND echo "save" >> combineLibs.mri
    COMMAND echo "end" >> combineLibs.mri
    COMMAND ar -M <combineLibs.mri
    COMMAND rm ${OUTPUT_LIB}.old
    COMMENT "Building merged library for ${LIBNAME} at ${OUTPUT_LIB}, including ${CUDA_LOCATION}"
)
target_link_libraries(${LIBNAME} -pthread -c)
This successfully produces a merged static library that has all the symbols in it. However, the NVIDIA CUDA static lib brought with it dependencies on libpthread and libc in the form of unresolved symbols. Now the merged library also has those unresolved symbols, and the target_link_libraries line doesn't seem to do what I think it does, because the symbols don't get resolved at link time. How do I get the merged static library to dynamically link against libpthread and libc?
The target_link_libraries line does indeed not do what you think it does.
target_link_libraries(target options) can have the desired effect of
adding the linker options to the linkage of target only if target
is something that is produced by the linker. If no linkage happens in the
production of target, then this directive has no effect.
Your target is a static library. A static library - unlike a program, and unlike
a dynamic/shared library - is not produced by the linker. As your custom_command
in fact illustrates, a static library is produced by the GNU general purpose archiver,
ar. It is nothing but an archive of files which happen to be object files,
but as far as ar is concerned they might as well be the contents of your
Documents, Pictures and Music folders. Since no linkage is involved in the
production of a static library, nothing can be linked with a static library.
An ar archive can be used as a linker input in the linkage of something that
is produced by the linker - a program or a shared library. In that case the
linker will look into the archive to see if it contains any object files it needs
to carry on the linkage. If it finds any, it will extract them from the archive
and link them into the program. The linkage will be exactly the same as if
you had listed the required object files in the linker commandline and not
mentioned the archive at all.
But if any of the object files that the linker extracts from an archive bring
with them undefined references, then to get them resolved you must link some
library or libraries that define those references in the linkage of the
program or shared library that you want the linker to produce - just as you
must do to resolve undefined references in any other object files you
input to the linkage.
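Concretely, the linkage of a program using the merged archive might look like this (file names illustrative); it is here, not when creating the archive, that the pthread references get resolved:
$ cc -o myapp main.o libmerged.a -pthread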
So,
How do I get the merged static library to dynamically link against libpthread and libc?
You can't. It doesn't make sense. Any library dependencies of object files
in a static library can be satisfied only in the linkage of a program or shared library
that has acquired those dependencies by linking those object files.
Finally, -c is not a GCC linkage option that will have the effect of requesting
linkage of libc. It is not a linkage option at all. It is an option that
directs the GCC frontend not to invoke the linker. It is passed to GCC to
request compilation without linkage, and the perverse effect of including it in a
CMake target_link_libraries directive will be to stop any linkage of the
target from happening.
If you want to explicitly request linkage of libc, use -lc, following
the linker usage protocol that -lname requests linkage of libname.
Perhaps you inferred that -c requests linkage of libc from the assumption
that -pthread requests linkage of libpthread. In fact, -lpthread would
request linkage of libpthread. The option -pthread is a more abstract GCC
option, for both compilation and linkage, that means: do the right things, for
this platform, to link with the POSIX Threads library - which might entail
passing -lpthread to the linker, and possibly not.
Thus -pthread is OK as an argument of target_link_libraries that will
have the effect of requesting Posix Threads linkage, but see
answers to cmake and libpthread
for CMake-proper ways of doing this.
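Following those answers, a minimal sketch of the CMake-proper approach, applied to a program that consumes the merged library (target and file names hypothetical):

find_package(Threads REQUIRED)
add_executable(myapp main.cpp)
target_link_libraries(myapp ${LIBNAME} Threads::Threads)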

How to import library into swift code used in CLI?

What I want to do
I want to fetch a JSON file through an API using a Swift library on the CLI.
I am using the library Alamofire, but I can't understand how to import a Swift library.
Maybe some tasks are needed before importing (building? linking?), but I can't find documentation on how to do them...
Could you teach me how to do it?
Problem
$ swift getAPI.swift
getAPI.swift:1:8: error: no such module 'Alamofire'
import Alamofire
^
Source code
import Alamofire

func getArticles() {
    Alamofire.request(.GET, "https://qiita.com/api/v2/items")
        .responseJSON { response in
            print(response.result.value)
        }
}

getArticles()
The following should help with running one-off scripts and when using the Swift REPL. However, for building simple command-line tools, I would recommend using the Swift Package Manager, since it takes care of the linking and dependency management for you.
To link your script code against the Alamofire library, the Swift compiler has to know where the library is located. By default it searches /Library/Frameworks, but we can supply options to tell it where else to look. Checking swift --help, we see the following options (among many others):
USAGE: swift [options] <inputs>
OPTIONS:
-F <value> Add directory to framework search path
-I <value> Add directory to the import search path
-L <value> Add directory to library link search path
The directory you supply must contain the compiled binaries of the libraries you want to import.
To import .swiftmodule binaries, use -I
To import .framework binaries, use -F
For linking to libraries like libsqlite3.0.dylib, use -L
I think Carthage and CocoaPods will build frameworks, whereas the Swift Package Manager will build .swiftmodules. All three should put the binaries in very predictable locations. Different kinds of binaries may all be in the same directory, and that's okay.
Putting it all together, if you build with SPM, your invocation might look like this:
$ swift -I .build/debug/
But if you manage dependencies with Carthage, it might look something like this:
$ swift -F ./Carthage/Build/iOS
For further reading, I found these resources useful:
krakendev.io/blog/scripting-in-swift
realm.io/news/swift-scripting/
Update: The Swift package manager can now take care of this for you as well! If you're writing a script as a package and include a bunch of dependencies, like Alamofire, you can now test them out in the REPL. Just cd into your package directory and launch it using swift run --repl. See the swift.org blog for more details.
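For illustration, a sketch of a package manifest for such a command-line tool, declaring Alamofire as a dependency (the version requirement is illustrative):

// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "getAPI",
    dependencies: [
        // fetched, built, and linked by SwiftPM automatically
        .package(url: "https://github.com/Alamofire/Alamofire.git", from: "5.0.0"),
    ],
    targets: [
        .executableTarget(name: "getAPI", dependencies: ["Alamofire"]),
    ]
)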

Setting module name to be different from directory name in SwiftPM

I have a Swift library with a core module plus optional bonus modules. I would like to use the following directory layout, mapping to exported Swift package names as shown:
Taco/
    Source/
        Core/        → import Taco
        Toppings/    → import TacoToppings
        SideDishes/  → import TacoSideDishes
To my eyes, that’s a sensible-looking project layout. However, if I’m reading the docs right, this will pollute the global module namespace with unhelpful names like “Core”. It seems that SwiftPM will only export a module whose name is identical to the directory name, and thus I have to do this:
Taco/
    Source/
        Taco/
        TacoToppings/
        TacoSideDishes/
Is there a way to configure Package.swift to use the tidier directory layout above and still export the desired module names?
Alternatively, is it possible to make the Core, Toppings, and SideDishes modules internal to the project, and export them all to the world as one big Taco module?
There is not currently a clean way to do this, but it seems like a reasonable request. I recommend filing an enhancement request at http://bugs.swift.org for it.
There is one "hacky" way you can do this:
Create your sources in the desired internal layout:
Sources/Core
Sources/Toppings
Add additional symbolic links for the desired module names:
ln -s Core Sources/Taco
ln -s Toppings Sources/TacoToppings
Add an exclude directive to the manifest to ignore the non-desired module names:
let package = Package(
    name: "Taco",
    exclude: ["Sources/Core", "Sources/Toppings"]
)
is it possible to make the Core, Toppings, and SideDishes modules internal to the project, and export them all to the world as one big Taco module?
No, unfortunately there is no way to do this currently, and it requires substantial compiler work to be able to support.

Why aren't other modules being compiled?

I have two files: Main.d and ImportMe.d. Their purposes should be self-explanatory. They are in the same directory, and have no explicit module declaration. When I try to compile Main.d, though, I get a "symbols not found" error!
$ dmd Main.d -I.
Undefined symbols:
"_D8ImportMe12__ModuleInfoZ", referenced from:
_D4Main12__ModuleInfoZ in Main.o
"_D8ImportMe8SayHelloFxAyaZv", referenced from:
__Dmain in Main.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--- errorlevel 1
Compiling both files at the same time works fine.
$ dmd Main.d ImportMe.d
You don't have to do this with the standard library, though. What is it doing differently? Changing the include path via -I has no visible effect.
When you compile a module, dmd must have the .d or .di files for all of the modules that that module needs in its import path. -I allows you to add paths to the import path. However, that does not build those other modules. It just gives dmd what it needs to build the module that you requested it to build. And when you link, dmd needs either the object files or the library binaries for all of the modules being used in the program, otherwise it's going to complain about undefined symbols (-L can be used for linker flags if you want to link in libraries). The linking step uses the C linker, so it's not D-aware at all and doesn't know anything about modules.
So, if you compile and link in two steps, you first compile each module separately or together with other modules, generating either object files or library files, depending on the flags that you pass the compiler (object files are the default). You then link those object files and libraries together in the linking stage, generating the executable.
When you use dmd without passing it -c or -lib, it's going to do both the compiling and the linking together, so you must provide it all of the modules that you intend to compile, or when it gets to the linking step, it's going to complain about undefined symbols. It doesn't magically go and compile all of the modules that the modules that you ask it to compile import. If you want that sort of behavior, you need to use a tool such as rdmd.
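For example, the two-step build for the two modules above looks like this:
$ dmd -c Main.d ImportMe.d   # compile only: produces Main.o and ImportMe.o
$ dmd Main.o ImportMe.o      # link: produces the executable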
dmd is able to find druntime and Phobos without you having to specify them because of dmd.conf (on Posix) or sc.ini (on Windows). That configuration file adds the appropriate .d and .di files to the import path and adds libphobos.a or phobos.lib (depending on the platform) to DFLAGS so that dmd can find those modules when compiling your modules and can link in the library in the linking phase. It also adds in any other flags that the standard library needs to work (such as linking in librt on Linux). If you move any of those files to non-standard places, it's that configuration file that you need to change to make it so that dmd can still find them.
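For reference, the relevant part of such a configuration file might look like the following (the paths and flags vary by platform and installation; these lines are illustrative):
[Environment]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -L-lphobos2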
You don't have to specify modules from the standard library because the compiler implicitly passes the precompiled standard library .lib file to the linker. For your own projects, consider using rdmd or another build tool.