Is there any way to define my own options in *.pc files besides the standard options?

I am trying to write a .pc file and want to define my own flag, ldflags, and use it like this:
pkg-config package --ldflags
but pkg-config reports an unknown option ldflags.
Is there a way to define my own option in a .pc file?

A .pc file has a special format; the documentation describes it as follows:
The file format contains predefined metadata keywords and freeform
variables.
Keywords are fixed, e.g. Cflags, and most of them correspond to pkg-config tool options, e.g. --cflags. Variables, on the other hand:
... can be used by the keyword definitions for flexibility or to store
data not covered by pkg-config
So it is possible to add your own. For example, here is the minimal possible file (Name, Version, and Description are mandatory keywords):
$ cat test.pc
Name: test
Version: 1
Description: test
aaa=bbb
$ pkg-config --variable=aaa test.pc
bbb
All variables can be listed with the --print-variables option (interestingly, a pcfiledir variable is added automatically):
$ pkg-config --print-variables test.pc
aaa
pcfiledir
$ pkg-config --variable=pcfiledir test.pc
.
A variable can even reference another one, which you can (re)define on the command line:
$ cat test.pc
Name: test
Version: 1
Description: test
aaa=bbb${ccc}
$ pkg-config test.pc --variable=aaa --define-variable=ccc=xxx
bbbxxx
I do wonder about your use case, though, since the contents of this file are just metadata about a package's dependencies. Perhaps you should add these flags to the Libs or Libs.private keywords instead? See the sketch below.
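For instance, if the goal is to carry extra linker flags, a minimal sketch could look like this (the library name foo and the /opt/foo paths are made up for illustration): the flags live in the standard Libs keyword, while a freeform variable still lets you query them on their own.
$ cat test.pc
Name: test
Version: 1
Description: test
extra_ldflags=-L/opt/foo/lib -Wl,-rpath,/opt/foo/lib
Libs: ${extra_ldflags} -lfoo
$ pkg-config --variable=extra_ldflags test.pc
-L/opt/foo/lib -Wl,-rpath,/opt/foo/lib
pkg-config --libs test.pc would then emit those flags together with -lfoo, which is what most build systems already consume.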

How to use plugin commands in bcftools?

My goal is to use bcftools to check that the reference alleles in my dataset (vcf file) match with a reference genome (fasta file) using the fixref plugin.
Working on command line, I first set the following environment:
export BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
The following code is recommended for test datasets with mismatches:
bcftools +fixref test.bcf -Ob -o output.bcf -- -f ref.fa -m top
When I run this code using my own files (please note that my data is .vcf, not .bcf) I get the following error:
[main] Unrecognized command
If I simply enter:
bcftools
I get a list of only 5 commands (view, index, cat, ld, ldpair) that I can use. So although I've set the environment variable, does it somehow need to be activated? Do I need to run my command through a bash script?
It turned out that
bcftools
was pointing to a deprecated version of bcftools (0.1.19) in ../bin/, while
BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
was pointing to the plugins for bcftools version 1.10.2 outside /bin/.
Replacing ../bin/bcftools (0.1.19) with version 1.10.2 was the fix.
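A quick way to catch this kind of mismatch (a hedged sketch; the install paths below are placeholders) is to confirm which binary is on your PATH and that it can see the plugins before running fixref:
# confirm which bcftools is being picked up and its version
which bcftools
bcftools --version
# point BCFTOOLS_PLUGINS at the plugin directory of that same installation, then list the plugins
export BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
bcftools plugin -l
# fixref accepts VCF input as well as BCF; the reference fasta should be faidx-indexed (ref.fa.fai)
bcftools +fixref input.vcf.gz -Oz -o output.vcf.gz -- -f ref.fa -m top
If fixref shows up in the plugin listing, the +fixref invocation should no longer hit "Unrecognized command".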

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book-website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
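For reference, those collection lines normally sit alongside the standard BBPATH/BBFILES lines; a typical minimal layer.conf looks roughly like the sketch below (the first two lines come from the usual Yocto layer template rather than from the book):
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"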
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that BitBake recognizes. In this case, add the following line to nano.bb:
LICENSE="GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python used in the recipe, because bb.data was deprecated and has since been removed. Replace it with d, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
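If you want to recompute or verify that checksum yourself, a rough sketch (the URL below is an assumption; substitute whatever ${SITE}/v${PV_MAJOR}.${PV_MINOR} expands to in your recipe) is:
# assumed download location -- check the SITE value in your recipe
SITE_URL=http://www.nano-editor.org/dist/v2.2
wget "${SITE_URL}/COPYING"
md5sum COPYING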
Done!
Now bitbake nano ought to work, and when it is complete you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello-world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake you got from Poky, it builds only for the target unless you state in your recipe that you are building for the host machine (native). You can do that by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise BitBake raises an error. In our case we are building version 2.2.6 of the nano editor; its license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means you must access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same concept to the next tasks (do_configure, do_compile), as sketched after this block:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
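For completeness, here is a sketch of how do_configure and do_compile could follow the same pattern. It assumes the tarball unpacks into ${WORKDIR}/${P} and that nano builds with a plain configure-and-make; it is not the book's exact code:
python do_configure() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring source")
    # run configure inside the unpacked source tree
    os.system("cd " + src_dir + " && ./configure")
}

python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Compiling source")
    os.system("cd " + src_dir + " && make")
}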
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory if you are building on Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Possible to change the package name when generating client code

I am generating the Scala client code for an API using the Swagger Editor. I pasted in the JSON and then chose Generate Client > Scala. It gives me a default root package of
io.swagger.client
I can't see any obvious way of specifying something different. Can this be done?
Step (1): Create a file config.json and define the package names in it:
{
  "modelPackage" : "com.xyz.model",
  "apiPackage" : "com.xyz.api"
}
Step (2): Now pass the above file to the codegen command with the -c option:
$ java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l java -o Code -c path/config.json
Now it will generate your Java packages as com.xyz… instead of the default io.swagger.client…
Run the following command to get information about the supported configuration options
java -jar swagger-codegen-cli.jar config-help -l scala
This will give you information about the options supported by this generator (Scala in this example):
CONFIG OPTIONS
sortParamsByRequiredFlag
Sort method arguments to place required parameters before optional parameters. (Default: true)
ensureUniqueParams
Whether to ensure parameter names are unique in an operation (rename parameters that are not). (Default: true)
modelPackage
package for generated models
apiPackage
package for generated api classes
Next, define a config.json file with the above parameters:
{
  "modelPackage": "your package name",
  "apiPackage": "your package name"
}
And supply config.json as input to swagger-codegen using the -c flag.
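Putting it together for Scala, the invocation would then look something like this (the file and directory names are placeholders for your own paths):
$ java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l scala -o client -c path/config.json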

How can I change the installation path of an autotools-based Bitbake recipe?

I have an autotools-based BitBake recipe which I would like to have binaries installed in /usr/local/bin and libraries installed in /usr/local/lib (instead of /usr/bin and /usr/lib, which are the default target directories).
Here's the part of the autotools.bbclass file which I found relevant:
CONFIGUREOPTS = " --build=${BUILD_SYS} \
--host=${HOST_SYS} \
--target=${TARGET_SYS} \
--prefix=${prefix} \
--exec_prefix=${exec_prefix} \
--bindir=${bindir} \
--sbindir=${sbindir} \
--libexecdir=${libexecdir} \
--datadir=${datadir} \
--sysconfdir=${sysconfdir} \
--sharedstatedir=${sharedstatedir} \
--localstatedir=${localstatedir} \
--libdir=${libdir} \
...
I thought that the easiest way to accomplish what I wanted to do would be to simply change ${bindir} and ${libdir}, or perhaps change ${prefix} to /usr/local, but I haven't had any success in this area. Is there a way to change these installation variables, or am I thinking about this in the wrong way?
Update:
Strategy 1
As per Ross Burton's suggestion, I've tried adding the following to my recipe:
prefix="/usr/local"
exec_prefix="/usr/local"
but this causes the build to fail during that recipe's do_configure() task, and returns the following:
| checking for GLIB... no
| configure: error: Package requirements (glib-2.0 >= 2.12.3) were not met:
|
| No package 'glib-2.0' found
This package can be found during a normal build without these modified variables. I thought that adding the following line might allow the system to find the package metadata for glib:
PKG_CONFIG_PATH = " ${STAGING_DIR_HOST}/usr/lib/pkgconfig "
but this seems to have made no difference.
Strategy 2
I've also tried Ross Burton's other suggestion to add these variable assignments to my distribution's configuration file, but this causes the build to fail during meta/recipes-extended/tzdata's do_install() task. It reports that DEFAULT_TIMEZONE is set to an invalid value. Here's the source of the error from tzdata_2015g.bb:
# Install default timezone
if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
    install -d ${D}${sysconfdir}
    echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
    ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
else
    bberror "DEFAULT_TIMEZONE is set to an invalid value."
    exit 1
fi
I'm assuming that I've got a problem with ${datadir}, which references ${prefix}.
Do you want to change the paths for everything or just one recipe? I'm not sure why you'd want to change just one recipe to /usr/local, but whatever.
If you want to change all of them, then the simple way is to set prefix in your local.conf or distro configuration (prefix = "/usr/local").
If you want to do it in a particular recipe, then just assigning prefix="/usr/local" and exec_prefix="/usr/local" in the recipe will work.
These variables are defined in meta/conf/bitbake.conf, where you can see that bindir is $exec_prefix/bin, which is probably why assigning prefix didn't work for you.
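For reference, the defaults in bitbake.conf chain together roughly like this (paraphrased from memory, so check your copy for the exact lines):
# paraphrased from meta/conf/bitbake.conf -- exact lines differ between releases
export prefix = "/usr"
export exec_prefix = "/usr"
export bindir = "${exec_prefix}/bin"
export libdir = "${exec_prefix}/${baselib}"
export datadir = "${prefix}/share"
So bindir and libdir follow exec_prefix, while datadir (which tzdata uses) follows prefix, which is why changing prefix alone ripples further than just the binaries.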
Your first strategy was on the right track, but by changing only prefix you were clobbering more than you wanted. If you look in sources/poky/meta/conf/bitbake.conf you'll find everything you are clobbering when you set prefix to something other than /usr (as it was in my case). In order to modify only the install path, with what would manually be the --prefix option to configure, I needed to set all of the following variables in that recipe:
prefix="/your/install/path/here"
datadir="/usr/share"
sharedstatedir="/usr/com"
exec_prefix="/usr"

Is there a way to tell django compressor to create source maps

I want to be able to debug minified, compressed JavaScript code on my production site. Our site uses django compressor to create minified and compressed js files. I recently read that Chrome can use source maps to help debug such JavaScript; however, I don't know how (or whether it's possible) to tell django compressor to create source maps when compressing the js files.
I don't have a good answer regarding outputting separate source map files; however, I was able to get inline source maps working.
Prior to adding source maps, my settings.py file used the following precompilers:
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)
After a quick
$ lessc --help
you find out that you can embed the less source and the source map inline in the output css file. So my new text/less precompiler entry looks like:
('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
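In context, the full precompiler setting then looks like this (the same entries as above, with only the text/less line changed):
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)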
Hope this helps.
Edit: Forgot to add that lessc >= 1.5.0 is required for this; to upgrade, use
$ [sudo] npm update -g less
While I couldn't get this to work with django-compressor (though it should be possible, I think I just had issues getting the app set up correctly), I was able to get it working with django-assets.
You'll need to add the appropriate command-line argument to the less filter source code as follows:
diff --git a/src/webassets/filter/less.py b/src/webassets/filter/less.py
index eb40658..a75f191 100644
--- a/src/webassets/filter/less.py
+++ b/src/webassets/filter/less.py
@@ -80,4 +80,4 @@ class Less(ExternalTool):
def input(self, in_, out, source_path, **kw):
# Set working directory to the source file so that includes are found
with working_directory(filename=source_path):
- self.subprocess([self.less or 'lessc', '-'], out, in_)
+ self.subprocess([self.less or 'lessc', '--line-numbers=mediaquery', '-'], out, in_)
Aside from that tiny addition:
make sure you've got the node -- not the ruby gem -- less compiler (>=1.3.2 IIRC) available in your path.
turn on the sass source-maps option buried away in chrome's web inspector config pages. (Yes, 'sass', not less: less tweaked their debug-info format to match sass's, since sass had already implemented a chrome-compatible mapping and their formats weren't that different to begin with anyway...)
Not out of the box, but you can create a custom filter by extending CompilerFilter:
from compressor.filters import CompilerFilter

class UglifyJSFilter(CompilerFilter):
    command = "uglifyjs -c -m " \
              "--source-map-root={relroot}/ " \
              "--source-map-url={name}.map.js " \
              "--source-map={relpath}/{name}.map.js -o {output}"