Does a Swift executable binary need the .swiftmodule, .swiftdoc and .build files to run? - swift

I'm writing my Swift app for Ubuntu using Vapor, and my mission is to have the smallest possible Docker image for production. I've trimmed down my image significantly, but I wanted to know, just out of curiosity: does my final executable need all the compiled .swiftmodule, .swiftdoc and .build files in the same directory?

tl;dr: No.
The folders/files you listed are byproducts of the build process and can be safely discarded.
When it comes to distribution, your application is just like any other Linux executable. You must have all dynamically linked libraries available on the target system.
These include the runtime libraries of the Swift toolchain plus any compiled C modules your application (or the framework beneath it) links with (*).
You can check the dependencies of the executable using the ldd command.
Some of them are available as packages, some of them will need to be copied to the target system manually.
(*) In case of a Vapor 2 application, such C modules are libCHTTP.so and libCSQLite.so, which are placed in your build folder.
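For illustration, checking a Vapor build and staging the Swift runtime next to it might look like this (the executable name Run and the paths are hypothetical; adjust them to your project and toolchain):
ldd .build/release/Run
# every line marked "not found" is a library you must ship or install;
# the Swift runtime .so files live under usr/lib/swift/linux in the Linux toolchain
cp /opt/swift/usr/lib/swift/linux/*.so /app/lib/
export LD_LIBRARY_PATH=/app/lib
/app/Run
In a Dockerfile you would COPY those runtime libraries into the production image alongside the binary.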

Related

Yocto deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I should deploy?
You can have two different virtual recipes, each with its own .so file. This then requires a selection in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or distro configuration file. This is probably the preferred approach if you plan to have release and debug distros.
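A minimal sketch of that selection, assuming two hypothetical recipes my-tool-release and my-tool-debug that both declare PROVIDES = "virtual/my-tool", placed in your distro or machine .conf:
PREFERRED_PROVIDER_virtual/my-tool = "my-tool-debug"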
A second option is to install the libraries at two different paths, in two different PACKAGES (use FILES_my-package for that), and make them RCONFLICTS_my-package with each other so they can never both end up in the rootfs. After that, you can write a pkg_postinst_my-package() task specific to each package that actually moves the library from the "different" path to the intended one. This is run both at build time when creating the rootfs and at runtime on first boot, so you need to make sure it executes in only one of the two cases (usually by checking whether ${D} exists, which it does at build time but not at runtime); see the sketch after the link below.
cf.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
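A rough sketch of such a post-install script, with hypothetical package and path names ($D is set while the rootfs is being built and unset on the running target):
pkg_postinst_my-tool-debug() {
    if [ -n "$D" ]; then
        # build time only: move the staged debug library into the intended path
        mv $D/opt/my-tool/debug/libmytool.so $D/usr/lib/libmytool.so
    fi
}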
If you can manage to have both libraries installed in your rootfs and select the one you want with the LD_LIBRARY_PATH environment variable, a simple recipe with two packages, each installing its library to a different location, will be sufficient.
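For instance, picking the debug copy at runtime could be as simple as (paths hypothetical):
LD_LIBRARY_PATH=/opt/my-tool/debug:$LD_LIBRARY_PATH ./my-tool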

How to add Java sub directory to Resources directory of mac app bundle

I would like to use install4j to make it easier to deploy my Java application to Windows, Mac, and Linux. I am evaluating install4j on my Windows development machine to make sure it can do what I need before I purchase it.
So far, I can get it to work for Windows and Linux but not for the Mac. The Mac app bundle that I cobbled together (without install4j) currently has the following structure where the Java dir contains external jar files (such as derby.jar) required by my application.
myApp.app
  Contents
    MacOS
    Resources
      Java
Perhaps I can use a simpler structure but this is what I have for now and it works. Unfortunately, the structure install4j builds does not work (it cannot find my derby.jar) and I cannot figure out how to get install4j to duplicate the app bundle directory structure that I know does work.
Any suggestions?

strong naming for microsoft enterprise library

I am using the Microsoft Enterprise Library in one of my projects. I need to strong-name one of the DLLs, Microsoft.Practices.EnterpriseLibrary.Common, but it is not working.
When I decompile it using ILDASM, it generates three files:
IL file
.RESOURCES file
Common resource script file
How do I compile it with the key file? Which ILASM command should I use?
The DLLs are distributed from the original install in a few different modes. One set of files is already signed, so you need to find that set and use the files from it.
When you install the EntLib package, you get the compiled binaries (some are signed) AND you get the source code; building from the source code on your machine produces unsigned DLLs.
My guess is that you are using the unsigned files (compiled from the source code on your local machine) instead of the signed ones.
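If you do decide to sign the locally built assembly yourself, the usual ILDASM/ILASM round-trip looks roughly like this (file names are hypothetical, and the .RESOURCES file must stay next to the .il file, which references it):
sn -k MyKey.snk
ildasm Microsoft.Practices.EnterpriseLibrary.Common.dll /out:Common.il
ilasm Common.il /dll /resource:Common.res /key:MyKey.snk /output:Common.Signed.dll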

Cross-compiling Makefile: dealing with test programs

I'm trying to cross-compile several libraries from OSX to iOS. I've successfully cross-compiled libjpeg and libogg.
But I can't compile libvorbis because configure insists on creating and running a small test program. This obviously fails, because it creates an armv7 binary, fails to run it, and then interprets this as missing ogg libraries.
How do you usually deal with this kind of problem? I'm tempted to hack the configure script to work around these issues, but because of this kind of failure some features may be disabled. I'm also thinking of letting configure generate a native Makefile and then convert it to use the iOS toolchain, but this seems too error prone.
Any advice?
If you are cross-compiling anything that has more dependencies than libc (glibc), it becomes much more complicated. You need to have already cross-compiled all the dependencies, and the cross-compiler toolchain and all helper build programs and scripts need to know how to find those dependencies (the cross-compiled libraries and headers).
You need to have already cross-compiled libogg (and its dependencies) and installed them into the cross-compile root directory. The headers and libraries from your build system can't be used for the host (armv7) system; they must be kept separate.
Also, if you want to have shared object libraries (*.so) and not just static libraries then there is a whole new set of complications. For example, while a cross-compiler toolchain contains a cross-compiled libc as part of the toolchain, you still need a libc for the host system. The libc that is part of the toolchain can be used for this, but the way it is structured is different than on the host system. Sometimes people copy and re-arrange the files, but often people just compile and install a new glibc for the root.
Anyways, all that to say: the two errors you are seeing are because the configure script is not able to find a cross-compiled libogg library. If you haven't already, you need to cross-compile libogg (and its dependencies) and install them into your target root. Then you need to tell the configure script where your cross-compiled headers (yes, headers are architecture-specific) and libraries are in your target root, usually using CFLAGS, LDFLAGS, CXXFLAGS, etc. (NOT --prefix), though there may be other environment variables you need to set to affect things like pkg-config. After you have built each dependency, you then need to get the makefile to install the dependency into the root. Usually this is done with make DESTDIR=[root] install, but some makefiles have their own mechanism (or no proper alternate install mechanism).
You may also need to override certain configure checks (using environment variables) that are poorly written and don't have good cross-compile defaults. These variables usually start with ac_cv_*.
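For example, a cache-variable override that often comes up when cross-compiling (the exact variable depends on which check fails for you):
export ac_cv_func_malloc_0_nonnull=yes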
So the basic process is to do this for packages that you need (in dependency order):
export CFLAGS=-I[root]/usr/include LDFLAGS=-L[root]/usr/lib CXXFLAGS=-I[root]/usr/include
export ac_cv_[test1]=[yes|no] ac_cv_[test2]=[yes|no] ...
./configure --host=[arm7-blah-blah]
make
make DESTDIR=[root] install
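Putting the pieces together for the libvorbis case, the generic sequence above might look roughly like this (the host triple, SDK lookup, and [root]=/usr/local/ios are illustrative, not canonical):
export ROOT=/usr/local/ios
export CC="$(xcrun --sdk iphoneos --find clang)"
export CFLAGS="-arch armv7 -isysroot $(xcrun --sdk iphoneos --show-sdk-path) -I$ROOT/usr/include"
export LDFLAGS="-L$ROOT/usr/lib"
./configure --host=arm-apple-darwin --prefix=/usr
make
make DESTDIR=$ROOT install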
Good luck. Once you feel comfortable with standard cross-compiling, then you will be ready to take on the real black art, the Canadian cross ;-)
I finally figured it out. I tricked configure by explicitly making it link with ogg (LDFLAGS="/usr/local/ios/lib/libogg-armv7.a" ./configure ...) and then removed the explicit reference to the library from the generated makefile.

netbeans c++ deployment

I developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
I have seen your code; you are probably missing XML files in the current folder where the executable is located... paste them there and then run as ./your-executable
I recommend that you use a makefile to recompile on your target machine which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to copy the executable over and run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
Of course, you also have the option to statically build all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
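For example (binary and package names hypothetical):
ldd ./myprogram
#   libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x...)
#   libQtGui.so.4 => not found
# each "not found" line names a library to install, e.g. on Ubuntu:
sudo apt-get install libqtgui4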
What type of package is your NetBeans build creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type:
Ubuntu - deb
Fedora/Red Hat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.