I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I should deploy?
Thanks and regards,
Martin
You can have two different recipes providing the same virtual target, each with its own .so file. You then select between them in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or a distro configuration file. This is probably the preferred approach if you are considering having separate release and debug distros.
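A minimal sketch of that first option (all recipe, file and variable names here are hypothetical): both recipes, say mytool-release_1.0.bb and mytool-debug_1.0.bb, package their own .so and declare

PROVIDES = "virtual/mytool"

and the machine or distro configuration file then picks one of them:

PREFERRED_PROVIDER_virtual/mytool = "mytool-debug"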
A second option is to install the libraries in two different paths, in two different PACKAGES (use FILES_my-package for that), and make them RCONFLICTS_my-package each other to be sure they can't both end up in the rootfs. After that, you could write a pkg_postinst_my-package() script specific to each package that actually moves the library from the "different" path to the intended one. This is run both at build time when creating the rootfs and at runtime on first boot, so you need to make sure to exclude one or the other (this is usually done by checking whether ${D} exists, which it does at build time but not at runtime).
cf. http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
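A rough sketch of such a post-install script, with hypothetical package and path names, restricted here to run only during rootfs creation:

pkg_postinst_my-package-debug() {
    # $D is set when this runs at rootfs creation time and unset on the target
    if [ -n "$D" ]; then
        mv $D/usr/lib/mytool-alt/libmytool.so $D/usr/lib/libmytool.so
    fi
}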
If you can manage to have both libraries installed in your rootfs and select the one you want with the LD_LIBRARY_PATH environment variable, a simple recipe with two packages, each installing its library in a different location, will be sufficient.
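For example, at runtime on the target (path and program names are only illustrative):

LD_LIBRARY_PATH=/usr/lib/mytool-debug my-application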
Related
I have a recipe which successfully invokes a legacy build command to cross-compile a target.
As a side effect it produces some custom native tools that are used in the build.
I want to reap those tools into a -tools-native package so that other recipes can depend on the main package to access the artifacts, and use the -tools-native package to further process those artifacts.
I can build such a native package as simply as adding:
PROVIDES = "${PN} ${PN}-tools-native"
SYSROOT_DIRS += "/"
PACKAGES += "${PN}-tools-native"
FILES_${PN}-tools-native += "/native-bin/*"
and having the install section install the native tools to /native-bin/
and yet it somehow isn't a real native package, and when DEPENDS'd by an additional recipe, the native-bin artifacts are installed in recipe-sysroot instead of recipe-sysroot-native.
I also have to install the tools with mode 0644, or bitbake tries to strip them (and fails, as they are native builds).
Because the native tools are already generated by the legacy build commands, I don't actually need to invoke the build again as a -native recipe variant.
It's a long build, and I don't want to run it twice either.
Currently I work around it by having the other recipes DEPEND on the -tools-native package and fix up the permissions and PATH themselves.
But what's the proper way to do this?
This is generally handled by separate recipes. There is no mechanism to share native binaries from target recipes as their task hashes have the wrong kinds of information in them (they change depending on the target architecture).
Target recipes don't install their bindir/sbindir into the sysroot since we can't run them and as you mention, they're the wrong architecture so they confuse strip and so on.
You could try having a native recipe that depends upon this target recipe and installs the binaries saved by the target recipe somewhere into its ${D} at do_install. That may well give some warnings, since in general native recipes shouldn't depend on target recipes, but it is probably your best option if you can't build twice.
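A rough sketch of what such a wrapper recipe could look like; the recipe names, the /native-bin path and the exact sysroot variable are all assumptions and depend on where the target recipe actually staged the binaries:

SUMMARY = "Native helper tools produced by the mytool target build"
LICENSE = "CLOSED"

inherit native

# unusual: a native recipe depending on a target recipe, expect warnings
DEPENDS = "mytool"

do_install() {
    install -d ${D}${bindir}
    # copy the tools the target recipe staged under /native-bin in its sysroot
    install -m 0755 ${RECIPE_SYSROOT}/native-bin/* ${D}${bindir}
}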
I have a simple CMake find module I've written for a library of mine that is used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library() call, and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
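A minimal sketch of one common way to get this behaviour, using CMake's FetchContent module (requires CMake 3.14+; the package name, repository URL and tag are placeholders):

find_package(mypkg QUIET)

if(NOT mypkg_FOUND)
    include(FetchContent)
    FetchContent_Declare(
        mypkg
        GIT_REPOSITORY https://github.com/example/mypkg.git
        GIT_TAG        v1.0    # any branch, tag or commit
    )
    # clones the repository at configure time and adds its CMakeLists.txt
    # to the build, so its targets become available to this project
    FetchContent_MakeAvailable(mypkg)
endif()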
I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
Abdul Khaliq
I have seen your code; you are probably missing the XML files in the folder where the executable is located. Paste them there and then run it as ./your-executable.
I recommend that you use a makefile to recompile on your target machine, which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
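If you end up writing the makefile by hand, a minimal one for a small C++ program might look like this (file and program names are placeholders; the command lines must be indented with a tab):

CXX      ?= g++
CXXFLAGS ?= -O2 -Wall

myprogram: main.cpp helper.cpp
	$(CXX) $(CXXFLAGS) -o myprogram main.cpp helper.cpp

clean:
	rm -f myprogram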
Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy and then run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
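Typical ldd output looks something like this (the library names here are only illustrative); any line ending in "not found" is a library you still need to install on the target:

$ ldd yourExecutable
        linux-vdso.so.1 (0x...)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x...)
        libxml2.so.2 => not found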
Of course, you also have the option to statically link all libraries into your executable. However, this is not recommended, since it makes the executable very large and complicates matters.
What type of package is NetBeans creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type.
Ubuntu - deb
Fedora/Redhat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.
I have a Django site placed in the folder site/. It's under version control. I use South for schema and data migrations for my applications. Site-specific applications live under site/, so they are all version-controlled along with their migrations.
I manage a virtualenv to keep third-party components contained and safe. I install packages via PyPI, and the list of installed packages is frozen in requirements.txt so they can easily be installed in another environment. The virtualenv is not under VCS; I think that's a good approach if the virtualenv can easily be deleted and reconstructed at any time. If I need to test my site, for instance, with another version of the Python interpreter, I simply activate another virtualenv.
I'd like to use South for third-party packages too, though. Here comes the problem: migration scripts are stored in the application's folder, so they are outside my site's repository. But I want the migration scripts to be under version control so I can run them on different stages as well.
I don't want to version control the whole virtualenv, just the migration scripts for third-party applications. How can I resolve this conflict? Is there any misconception in my scenario?
The SOUTH_MIGRATION_MODULES setting allows you to put migration modules for specified apps wherever you want them (i.e. inside your project tree).
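For example, in settings.py (the app label and module path below are placeholders):

SOUTH_MIGRATION_MODULES = {
    # keep migrations for the third-party app "registration" in the project
    # tree instead of inside the installed package
    'registration': 'myproject.thirdparty_migrations.registration',
}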
I think it depends a little bit on your version control system. I recommend using a sparse tree, one that only manages the migration folders of the various packages. Here I see two alternatives:
Make a truly sparse tree for all packages, one that you check out before creating the virtualenv. Then populate the virtualenv, putting stuff into the existing folders.
Collect all migrations into a separate repository, with a folder per project/external dependency. Check this out into the virtualenv and create symlinks linking each project to its migration folder (a rough sketch follows below).
In either case, I believe you can arrange for the migrations to exist as a separate project, so you can install it with the same process as you install everything else (easy_install/pip/whatever).
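A rough sketch of that second alternative, with placeholder repository, path and app names:

# check the shared migrations repository out next to the virtualenv
git clone git@example.com:migrations.git migrations

# point the third-party app's migrations folder at the version-controlled copy
ln -sfn $PWD/migrations/registration \
    env/lib/python2.7/site-packages/registration/migrations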
I posted this question looking for something similar to Buildout, but for Perl. I think Shipwright is what I'm looking for, but I'm not really sure. I've played around with it: I created a project, imported all of my source and dependencies, and exported everything to a vessel, and then the documentation sort of just stopped. What do I do with a shipyard vessel? Do I do my actual development work in the vessel, or do I do my development in the shipyard? I'm assuming that the vessel is only for deployment, but how do I actually deploy a vessel to a web server (say I'm using Linux, Apache, and just running straight CGI)?
Is Shipwright the right thing for what I'm trying to accomplish, or is there something else that would be more appropriate? Ideally I could use Shipwright similarly to how I use Buildout: I use Buildout to create a nice isolated environment for my development, and I also use it when deploying to live servers to manage all of my application's dependencies.
EDIT: Here are the highlights of what I can do with Buildout that I would like to be able to do in Perl.
With Buildout, I have a file in my codebase that lists dependencies (which for Perl would be either CPAN modules or other source repositories). I can run a bootstrap script that fetches all of those dependencies and drops them into a directory within my project, NOT installing them at the system level. Buildout also creates utility scripts, which can do anything you want (run tests, other command-line tools, anything really), and those scripts explicitly add the dependencies to the path so that all of my dependencies are available for import while my scripts are running.
What this really does very well is let me manage my dependencies without ever having to install anything at the system level, which makes changing from one version to another very easy. It also allows me to have multiple Buildout projects running on the same system using different versions of the same module. Finally, one huge benefit is that with Buildout's directory structure, I can just commit the dependencies to source control, and to deploy to a new machine I only need to do a checkout: all of my dependencies are already satisfied without having to touch anything installed at the system level.
I don't think you'll find anything exactly like Buildout in Perl, but you could put together a couple of things that would do the trick.
You could use a standard Build.PL script for Module::Build for managing your dependencies and having commands to run tests, etc.
Then you could use cpanminus to do the installation of those dependencies into a local (non-system) directory.
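For example, something along these lines, assuming the dependencies are declared in your Build.PL (the local/ directory name is just a convention):

# install the declared dependencies into ./local instead of the system perl
cpanm --local-lib local --installdeps .

# make them visible to your scripts
export PERL5LIB=$PWD/local/lib/perl5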
Then you might be able to use Shipwright to do the bundling and deployment of the project with these now-local dependencies.