How to (force Yocto to) create a ${PN}.deb file?

I'm trying to add a library (a cmake project) to my Yocto project/image.
The package essentially consists of one static library (named hello.a) with some header files in C.
I wrote a recipe and can configure, compile, and package it.
Packaging produces four files: {hello-dbg, hello-dev, hello-src, hello-staticdev}.deb.
So there is no hello.deb.
That seems to be the problem preventing me from creating the image:
The following packages have unmet dependencies:
packagegroup-utils-extra : Depends: hello but it is not installable
E: Unable to correct problems, you have held broken packages.
When I try to add that by defining:
FILES_${PN} += "/usr/lib/hello.a"
bitbake does not allow adding static libraries to anything but the staticdev package, so that does not work.
My question is then, as the title says: how do I (force Yocto to) create the ${PN}.deb file?

Empty packages (i.e. packages containing no files) are not created by default. If you want to override that, you can do it via the ALLOW_EMPTY variable for the package, like this:
ALLOW_EMPTY:${PN} = "1"
You can also check the official documentation for ALLOW_EMPTY.
Just for clarification:
You can then install the ${PN} package (it won't install any files on the target system).
As before, your static library will still be shipped in the ${PN}-staticdev package.
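For example, a minimal sketch of the lines you could add to the hello recipe from the question (the RDEPENDS line is an extra assumption on my part; it makes installing hello also pull in the static library package):
ALLOW_EMPTY:${PN} = "1"
# Optional: installing "hello" then also installs the static library package
RDEPENDS:${PN} += "${PN}-staticdev"
With this in place an (empty) hello.deb is generated and the packagegroup-utils-extra dependency on hello can be satisfied.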

Related

How to install tar.gz package to Yocto by adding new layer?

I'm new to Yocto, so there are probably some mistakes and misunderstandings on my part; I'd appreciate your help.
So, I want to add a new package (tar file) to my custom image.
I have followed the steps in the manual and some online instructions. Running "bitbake mylayer" builds the recipe fine, but I get this error while building the image; here is the log:
DEBUG: Executing python function rootfs_deb_bad_recommendations
DEBUG: Python function rootfs_deb_bad_recommendations finished
DEBUG: Executing python function extend_recipe_sysroot
NOTE: Installed into sysroot: []
NOTE: Skipping as already exists in sysroot: ['depmodwrapper-cross', 'apt-native', 'dpkg-native', 'pseudo-native', 'update-rc.d-native', 'prelink-native', 'makedevs-native', 'ldconfig-native', 'opkg-util$
DEBUG: Python function extend_recipe_sysroot finished
DEBUG: Executing python function do_rootfs
NOTE: ###### Generate rootfs #######
NOTE: Installing the following packages: apt busybox copy-uefiimg-to-sda coreutils dpkg e2fsprogs-resize2fs libfontconfig1 libfreetype6 libglib-2.0-0 gptfdisk libjemalloc2 kernel-module-axi-dma-sensor ku$
ERROR: Unable to install packages.
Reading package lists...
Building dependency tree...
Reading state information...
Package mypackage is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'mylayer' has no installation candidate
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
And here is mylayer.bb:
SUMMARY = ""
LICENSE = "CLOSE"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://mypackage.tar"
Also, I have included the package in conf/local.conf:
IMAGE_INSTALL_append += " mylayer"
So besides trying to figure out how to solve this problem, I also have some questions:
I have read some example .bb files, and they mention LIC_FILES_CHKSUM. The mypackage.tar.gz is a package that installs a platform for the device, and I don't know much about the source code, so I don't know whether it is necessary to include the license, or how to tell whether this package needs a license to be installed.
In some answers I found online, one says that I need to include PACKAGE_CLASSES ?= "package_deb" (they want to install the .deb file), so probably in my case I would need PACKAGE_CLASSES ?= "package.tar", right? I have tried changing the variable, but it is still not successful.
The mypackage.tar includes some .deb files. If I cannot install mypackage.tar, can I instead install these .deb files? Can I put it all in mylayer.bb?
Thank you in advance. I have tried to study as many documents as I could, but I get confused; there is a huge amount of information to digest.
First, before answering your questions, let me mention some best-practice advice:
Rename the recipe to a meaningful name related to your compressed package.
Naming the recipe mylayer confuses Yocto users, because "layer" is already a separate concept in Yocto.
Regarding your recipe:
There is no need for FILESEXTRAPATHS, because the files directory next to the recipe is searched automatically.
FILESEXTRAPATHS is needed for .bbappend files.
You need to implement the do_install task; it does nothing by default.
do_install is the essential task that makes sure your files are included in the final image.
Besides that, when you specify a compressed source file in SRC_URI, Yocto automatically unpacks it.
This is mentioned here.
So, here is what your recipe should look like:
SUMMARY = ""
LICENSE = "CLOSED"
# Prevent Yocto from decompressing the file
SRC_URI = "file://mypackage.tar;unpack=0"
do_install() {
    # Create the opt folder in the final image; ${D} is ${WORKDIR}/image
    install -d ${D}/opt
    # Copy the compressed file there; you can change permissions as you want
    install -m 0755 ${WORKDIR}/mypackage.tar ${D}/opt
}
# Very important to specify what you installed in (do_install)
FILES_${PN} = "/opt/*"
Now, with IMAGE_INSTALL_append += " mylayer" in your local.conf, the file will be installed into the image.
Regarding your questions:
You mentioned that your compressed file contains .deb files, so I assume no license checksum is needed. Also, I understand you may have wanted to point to SRC_URI[md5sum] or other checksums for the full package. Those are not needed for local files either; they are used to check the integrity of sources fetched from the network.
PACKAGE_CLASSES, as mentioned here, tells the build system which format to package the data in. By "the data" I mean what you installed with do_install. That data gets packaged according to your PACKAGE_CLASSES variable, for example into a .deb file, and those packages, alongside the packages of all other recipes, are used to build the final rootfs. (Note that "package.tar" is not a valid value; the usual choices are package_deb, package_rpm and package_ipk.)
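For illustration, the backend is typically selected once in conf/local.conf; a minimal sketch matching the .deb-based image from your log:
# Build recipe output as .deb packages and use them to assemble the rootfs
PACKAGE_CLASSES ?= "package_deb"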
Yes. Rather than installing the tar file into the image and then unpacking it there to install all the .deb files with, for example, dpkg, you can use the bin_package class. The recipe must be changed for that:
Extract the tar file and put the .deb files in the local files folder.
Add all the .deb files to SRC_URI.
Inherit the bin_package class.
Specify the files to be packaged.
Your recipe should look like this:
SUMMARY = ""
LICENSE = "CLOSED"
SRC_URI = "file://deb_file1.deb \
           file://deb_file2.deb"
inherit bin_package
# No need for do_install; it is provided by the bin_package class
FILES_${PN} = ""
Important:
About FILES_${PN}: you need to list everything the .deb files install into the image folder.
For example, if your .deb file installs this:
/usr/bin/hello
/etc/hello.cfg
Specify them:
FILES_${PN} = "/usr/bin/*"
FILES_${PN} += "/etc/*"
Use * so that if other .deb files install into the same folders, everything gets included.
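Putting the pieces together, the recipe could look roughly like this (a sketch reusing the example file names and paths from above):
SUMMARY = ""
LICENSE = "CLOSED"
SRC_URI = "file://deb_file1.deb \
           file://deb_file2.deb"
inherit bin_package
# Package everything the .deb files put under /usr/bin and /etc
FILES_${PN} = "/usr/bin/* /etc/*"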

Can a recipe listed in DEPENDS have its own do_install() in Yocto?

I am trying to build a custom recipe with Yocto (Rocko) for my Linux i.MX6 based embedded system.
The main recipe has a dependency on another custom recipe (the main recipe uses header files from it), which also creates some binaries that need to be included in the final image.
I have added the other recipe (nbdkit) to the DEPENDS of main_recipe.bb:
DEPENDS += "nbdkit"
The main recipe creates a .so from its own source file, which includes header files from the nbdkit recipe. I am able to install the binaries and the .so generated by main_recipe by adding them in do_install().
Now, in the other recipe (nbdkit, http://cgit.openembedded.org/meta-openembedded/tree/meta-networking/recipes-support/nbdkit/nbdkit_git.bb?h=master), when I add do_install() to include the binaries generated by that recipe, the main_recipe build fails with a pkg-config error as follows:
| Package nbdkit was not found in the pkg-config search path.
| Perhaps you should add the directory containing `nbdkit.pc'
| to the PKG_CONFIG_PATH environment variable
| No package 'nbdkit' found
Other build errors say that the nbdkit header files included by main_recipe are not found.
app-nbdkit-plugin.c:2:10: fatal error: nbdkit-plugin.h: No such file or directory
Here app-nbdkit-plugin.c is a source file of main_recipe and nbdkit-plugin.h is a header file of the other recipe.
The strange thing is that when I remove do_install() from the other recipe (nbdkit), main_recipe builds successfully.
Now I wonder: is it possible for a recipe to be in the DEPENDS of another recipe and at the same time install its own output files via do_install()?
Would sharing the header files from the other recipe with main_recipe resolve the issue? If yes, how do I do that?
Thanks.
[EDIT] Added nbdkit recipe link.
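For reference, the consuming side of such a setup usually looks roughly like this (a sketch only; the plugin file name is hypothetical, and the key point is that DEPENDS only exposes to main_recipe whatever the nbdkit recipe's do_install stages into the sysroot, i.e. its headers and nbdkit.pc):
DEPENDS += "nbdkit"
do_install() {
    install -d ${D}${libdir}/nbdkit/plugins
    # hypothetical plugin library built by this recipe
    install -m 0755 ${B}/app-nbdkit-plugin.so ${D}${libdir}/nbdkit/plugins
}
FILES_${PN} += "${libdir}/nbdkit/plugins"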

Error saying "Module Not Found" when adding an SPM which uses other SPMs as dependencies within itself

I have been creating a Swift package. It uses 2 other Swift packages within itself. The package compiles fine when compiled independently. As soon as it is imported into an Xcode project I get a compiler error saying:
No such module 'ModuleName'
Note: The ModuleName in the above error corresponds to the package imported within the package that is being imported to my project.
I have been stuck on this for quite a while now and have tried the following:
Removed and re-added the packages to my package's dependencies, and then tried importing my package into my project (I did this before and after each of the other steps too).
Checked to see where these packages were being added as dependencies. They show up in the SPM main target Module -> Build Phases -> Link binary with libraries. I additionally added them to the Dependencies section to see if it changes anything.
Tried adding SPMs to ModulePackageDescription target to Dependencies section.
Added the dependencies in the Package.swift file as follows.
dependencies: [
    // Dependencies declare other packages that this package depends on.
    .package(url: "package1_url", .branch("master")),
    .package(url: "package2_url", .branch("master"))
]
Adding this would import the other dependencies into my Xcode project. I don't exactly want this to happen, because if I try to use another version of a package that is already imported within my SPM, it would cause a conflict between the two versions. But I'm willing to do this if it's the right way to go. Even adding the dependencies in Package.swift didn't work for me, though. How would I resolve this issue? Let me know if anyone has faced the same issue.
Do the library's public classes also need to contain public initializers?
public struct NumbersA {
    public init() {
    }
}
Also add them to the target's dependencies in Package.swift: dependencies: ["NumbersA"]).

Yocto Image build fails because "nothing RPROVIDES libavresample"

I am trying to build a custom Yocto image based on fsl-image-gui for my iMX6 based board SECO A62J. I use Hob to do this.
After selecting my machine, my layers and my image, I customize my package list by adding chromium. This automatically selects libexif and libav, which are Chromium dependencies. The build of the packages is successful.
The last step is the build of the image itself, and this is where my problem appears. I select the packages I want to include in my image, including Chromium, libexif and libav (and their dependencies).
And I get these errors:
Nothing RPROVIDES 'libavresample' (but
/home/adrien/fsl-release-bsp/build_anna/recipes/images/fsl-image-gui-edited-20170131-144607.bb RDEPENDS on or otherwise requires it)
and
Required build target 'fsl-image-gui-edited-20170131-144607' has no
buildable providers. Missing or unbuildable dependency chain was:
['fsl-image-gui-edited-20170131-144607', 'libavresample']
However, the library libavresample.so is built successfully and can be found under my build directory in sysroots/"machine_name"/usr/lib/
Why can't Yocto find and include this library in my image? What am I missing here?
In your local.conf:
LICENSE_FLAGS_WHITELIST += " commercial"
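For context: recipes such as libav/ffmpeg are typically marked with LICENSE_FLAGS = "commercial", so their packages cannot be built or installed until that flag is whitelisted. You can whitelist broadly as above, or more selectively (a sketch; the exact flag name depends on the recipe that provides libavresample):
LICENSE_FLAGS_WHITELIST += " commercial_libav"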

Installing Cairo, Helm on Windows

How do I install Helm (https://hackage.haskell.org/package/helm) on Windows 7 (64-bit)?
(Update: I had posted a lot of error messages here, but I've moved them to my answer to not clutter up the question.)
Installation for Windows 64-bit:
I'm including the error messages you will see if you follow all the steps up to a given point and then just try to install directly. This is a conglomeration of ad-hoc steps from following many different posts; any simplification would be appreciated!
Note: Do all work in directories without spaces. I'm doing all work in C:/PF; modify this to your directory.
Download MSYS2-x86_64 from https://msys2.github.io/ and install it. Cabal install cairo (or helm) will give something like:
Configuring cairo-0.13.1.0...
setup.exe: Missing dependencies on foreign libraries:
Missing C libraries: z, cairo, z, gobject-2.0, ffi, pixman-1, fontconfig,
expat, freetype, iconv, expat, freetype, z, bz2, harfbuzz, glib-2.0, intl,
ws2_32, ole32, winmm, shlwapi, intl, png16, z
Download C libraries. In MINGW64 (NOT MSYS2 - I had trouble with MSYS2 at random stages in the process), use the package manager:
pacman -Ss cairo
to search for the Cairo package. You'll find "mingw64/mingw-w64-x86_64-cairo", so install that:
pacman -S mingw64/mingw-w64-x86_64-cairo
*.pc files should have been added to C:\PF\msys64\mingw64\lib\pkgconfig and C:\PF\msys64\usr\lib\pkgconfig. (pkg-config needs to be able to find these files. It looks in PKG_CONFIG_PATH, which by default should have the lib/pkgconfig folder above. Moving the file here is easiest. See Can't install sdl2 via cabal) If you get
The pkg-config package ... version ... cannot be found
errors then check your *.pc files.
Repeat with other required libraries, like atk
pacman -S mingw64/mingw-w64-x86_64-atk
(I don't know the complete list, but error messages later on will let you know what to get.)
Get the development files for these libraries (as suggested by How to install cairo on Windows). Most of them are bundled up at http://ftp.gnome.org/pub/gnome/binaries/win64/gtk+/2.22/. Unzip.
Copy files (.a, .dll.a) in lib to C:\PF\msys64\mingw64\lib. Copy the pkgconfig folder, which contains the .pc files.
Copy files in include to C:\PF\msys64\mingw64\include.
Add C:\PF\gtk+-2.22.1\bin to the path.
(2) and (3) might be redundant. I don't know - I did them both.
At this point you can probably do "cabal install cairo". (Warning: if your end goal is something else, you may not want to "cabal install" intermediate packages, see https://wiki.haskell.org/Cabal/Survival#Issue_.232_--_Not_installing_all_the_packages_in_one_go.)
See (4) for the syntax for specifying extra-include-dirs and extra-lib-dirs (but if you copied the files above this shouldn't be necessary).
Any time you get
Missing (or bad) header file
check that you copied the *.h files to mingw64\include and/or add the include folder to the PATH. Use cabal install -v3 to get verbose error messages if the problem persists.
If you get something like
cairo-0.13.1.0: include-dirs: /mingw64/include/freetype2 is a relative path
which makes no sense (as there is nothing for it to be relative to). You can
make paths relative to the package database itself by using ${pkgroot}. (use
--force to override)
try --ghc-pkg-options="--force" (as mentioned at https://github.com/gtk2hs/gtk2hs/issues/139).
Get SDL. Otherwise you'll get
configure: error: *** SDL not found! Get SDL from www.libsdl.org.
If you already installed it, check it's in the path. If problem remains,
please send a mail to the address that appears in ./configure --version
indicating your platform, the version of configure script and the problem.
Failed to install SDL-0.6.5.1
Follow the instructions in (2) to get sdl/sdl2 libraries. (See instructions here Installing SDL on Windows for Haskell (GHC).)
The new version helm-0.7.1 requires sdl2, but there are other dependency issues with helm-0.7.1 as of writing. Download SDL from http://sourceforge.net/projects/msys2/files/REPOS/MINGW/x86_64/ (direct download link to newest version as of writing http://sourceforge.net/projects/msys2/files/REPOS/MINGW/x86_64/mingw-w64-x86_64-SDL-1.2.15-7-any.pkg.tar.xz.sig/download), unzip. "cabal install sdl" gives
* Missing (or bad) header file: SDL/SDL.h
* Missing C library: SDL
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
so we specify where the dirs are (change the name depending on where you extracted sdl to)
cabal install sdl --extra-include-dirs=C:/PF/sdl\include --extra-lib-dirs=C:/sdl/lib
If you got SDL2 (http://libsdl.org/download-2.0.php) (for a newer version of Helm): there is a fatal bug that hasn't been fixed in the release version. (If you don't fix it, cabal install -v3 of things which depend on it will give the error
winapifamily.h: No such file or directory
("winapifamily.h: No such file or directory" when compiling SDL in Code::Blocks) Download https://hg.libsdl.org/SDL/raw-file/e217ed463f25/include/SDL_platform.h, replace the file in the include folder and in C:/PF/msys64/mingw64/include/SDL2.
Download gtk2hs from http://code.haskell.org/gtk2hs and run
the following
cd gtk2hs/tools
cabal install
cd ../glib
cabal install
cd ../gio
cabal install
cd ../pango
cabal install --ghc-pkg-options="--force"
(Maybe you have already installed glib and gio before? I did this step because a normal install of Pango gave me the error below; see https://github.com/gtk2hs/gtk2hs/issues/110.)
pango-0.13.1.0: include-dirs: /mingw64/include/freetype2 is a relative path
which makes no sense (as there is nothing for it to be relative to). You can
make paths relative to the package database itself by using ${pkgroot}. (use
--force to override)
Once the Helm developers get things updated you should be able to do "cabal install helm", but right now there seem to be dependency issues. For me, cabal automatically tries to install helm-0.4 (probably because 0.4 didn't give upper bounds on dependencies, while newer versions do; you could try "cabal unpack"ing and deleting the upper bounds...). Then
cabal unpack helm-0.4
Installing gives an error because "pure" got moved to Prelude. Open helm-0.4\src\FRP\Helm\Automaton.hs and change line 17:
import Prelude hiding (id, (.), pure)
Now
cabal install
Try to compile and run a program using Helm
(This is 0.4 - look on the website for a newer sample if you tried a newer Helm)
import FRP.Helm
import qualified FRP.Helm.Window as Window
render :: (Int, Int) -> Element
render (w, h) = collage w h [filled red $ rect (fromIntegral w) (fromIntegral h)]
main :: IO ()
main = run $ fmap (fmap render) Window.dimensions
If you get an error about a missing .dll (sdl.dll), find it in a bin/ folder and add the folder to your PATH (or copy it to somewhere on your path).