Check if a ccache call was a cache hit

As part of my build process, I'd like to get statistics on the build time and whether ccache found the item in the cache. I know about ccache -s where I can compare the previous and current cache hit counts.
However, if I have hundreds of compilation threads running in parallel, the statistics don't tell me which file caused the hit.
The return code of ccache is that of the compiler. Is there any way I can get ccache to tell me whether it got a cache hit?

There are two options:
Enable the ccache log file: Set log_file in the configuration (or the environment variable CCACHE_LOGFILE) to a file path. Then you can figure out the result of each compilation from the log data. It can be a bit tedious if there are many parallel ccache invocations (the log file is shared between all of them, so log records from the different processes will be interleaved) but possible by taking the PID part of each log line into account.
In ccache 3.5 and newer, it's better to enable the debug mode: Set debug = true in the configuration (or the environment variable CCACHE_DEBUG=1). ccache will then store the log for each produced object file in <objectfile>.ccache-log. Read more in Cache debugging in the ccache manual.
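For example, a minimal sketch assuming a make-based build (any way of invoking the compiler through ccache works the same):
# Enable ccache's debug mode (ccache 3.5+) just for this build:
CCACHE_DEBUG=1 make -j8
# Afterwards every object file has a companion log next to it, e.g.:
#   build/foo.o -> build/foo.o.ccache-log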

I wrote a quick-n-dirty script that tells me which files had to be rebuilt and what the cache miss ratio was:
Sample output (truncated):
ccache hit: lib/expression/unary_minus_expression.cpp
ccache miss: lib/expression/in_expression.cpp
ccache miss: lib/expression/arithmetic_expression.cpp
=== 249 files, 248 cache misses (99.598394 %)===
Script:
#!/usr/bin/env python3
from pathlib import Path
import os
import re
import sys

# Collect the ccache result ("hit ..." or "miss") per source file from all
# *.ccache-log files below src/ (written by ccache when debug mode is on).
files = {}
for filename in Path('src').rglob('*.ccache-log'):
    with open(filename, 'r') as file:
        source_file = None
        for line in file:
            source_file_match = re.findall(r'Source file: (.*)', line)
            if source_file_match:
                source_file = source_file_match[0]
            result_match = re.findall(r'Result: cache (.*)', line)
            if result_match:
                files[source_file] = result_match[0]
                break

if len(files) == 0:
    print("No *.ccache-log files found. Did you compile with ccache and the environment variable CCACHE_DEBUG=1?")
    sys.exit(1)

# Strip the common path prefix to keep the report readable.
common_path_prefix = os.path.commonprefix(list(files.keys()))
misses = 0
for file in files:
    shortened = file.replace(common_path_prefix, '')
    if files[file] == 'miss':
        misses += 1
        print("ccache miss: %s" % shortened)
    else:
        print("ccache hit: %s" % shortened)

print("\n=== %i files, %i cache misses (%f %%)===\n" % (len(files), misses, float(misses) / len(files) * 100))
Note that this takes all ccache-log files into account, not only those of the last build. If you want the latter, simply remove the log files first.
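For example, to limit the report to the last build (assuming the logs live under src/ as in the script above):
find src -name '*.ccache-log' -delete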


Yocto rust recipe also produces -native output that needs packaging

I tried this approach on hardknott, but I couldn't get it to work: my recipe also produces -native output that needs packaging.
It is a Rust recipe that generates an x86_64 app which I would like to package the right way in the SDK, so that it can be used.
I can separate the main package into -native-bin, and I see it in the recipe-sysroot, but I can't get it to populate the recipe-sysroot of the workdir when building the -native-helper recipe. I suspect the reason is this error saying the main recipe for x86_64 can't be found:
ERROR: Manifest xxxxxx.populate_sysroot not found in vs_imx8mp cortexa53 armv8a-crc armv8a aarch64 allarch x86_64_x86_64-nativesdk (variant '')?
So any helpful information would be appreciated!
Hacked like this:
Recipe.bb:
do_install_append() {
    # Set permission without the execute flag so that it doesn't fail on checks
    chmod 644 ${D}/usr/bin/#RECIPE#-compiler
}

# #RECIPE# generates a compiler during the target generation step.
# Separate this into the -native-bin package and skip the ARCH checks;
# the image file for stations_sdk then moves the app to the right dir and adds the execute flag.
PACKAGES_prepend = "${PN}-native-bin "
PROVIDES_prepend = "${PN}-native-bin "
INSANE_SKIP_${PN}-native-bin = "arch"
FILES_${PN}-native-bin = "/usr/bin/#RECIPE#-compiler"
SYSROOT_DIRS += "/"
Image.bb:
# #RECIPE# produces a compiler as part of the target generation step,
# so we use the recipe and hack it to supply the -compiler as part of the
# host binaries.
TOOLCHAIN_TARGET_TASK_append = " #RECIPE#-native-bin"

do_fix_#RECIPE#() {
    mv ${SDK_OUTPUT}/${SDKTARGETSYSROOT}/usr/bin/#RECIPE#-compiler ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
    chmod 755 ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
}
SDK_POSTPROCESS_COMMAND_prepend = "do_fix_#RECIPE#; "
In the end, this produces the binary in the right directory.

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools+libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book-instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
  ${TOPDIR}/meta-hello \
  /tmp/poky/meta \
  /tmp/poky/meta-poky \
  /tmp/poky/meta-yocto-bsp \
  "
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book-website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection-name "hello" but then uses _test suffixes on the next two lines. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python expansions used in the recipe, because @bb.data was deprecated and has since been removed. Instead, replace it with @d, ex:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
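If you want to verify that checksum yourself, something along these lines works (the exact URL is an assumption based on the recipe's SITE variable pointing at the nano download area; substitute whatever your SRC_URI actually expands to):
$ curl -sL http://www.nano-editor.org/dist/v2.2/COPYING | md5sum
# the first field should match the md5 used in LIC_FILES_CHKSUM above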
Done!
Now bitbake nano ought to work, and when it completes you should see the nano packages it built:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I have recently worked through that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, when you build with the bitbake that you got from Poky, it builds only for the target unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor, whose license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack python function; you can apply the same pattern to the next tasks (do_configure, do_compile):
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following location, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
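As a quick sanity check, you can run the natively-built binary directly (--version is a standard nano flag, so this should print the version that was just built):
$ ./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano --version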
Should you have any comments or questions, don't hesitate to ask!

unoconv fails to save in my specified directory

I am using unoconv to convert an ods spreadsheet to a csv file.
Here is the command:
unoconv -vvv --doctype=spreadsheet --format=csv --output= ~/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods
It saves the output file in the same directory as the source file, not in the specified directory. The error message is:
Output file: /home/richard/Dropbox/mariners_site/textFiles/expenses.csv
unoconv: UnoException during export phase:
Unable to store document to file:///home/richard/Dropbox/mariners_site/textFiles/expenses.csv (ErrCode 19468)
I'm sure that this worked initially, but it has since stopped.
I have checked for permissions and they are identical for both directories.
I translated ErrCode 19468 for you, and it boils down to ERRCODE_SFX_DOCUMENTREADONLY.
You can find more information about the specific meaning of LibreOffice ErrCode numbers in the unoconv documentation at: https://github.com/dagwieers/unoconv/blob/master/doc/errcode.adoc
The clue here is the whitespace character between --output= and the filename (--output= ~/Dropbox/mariners_site/textFiles/expenses.csv): because of it, unoconv sees an empty output value (which means the current directory) and is handed two input files. That explains why you get this specific error, IMO.
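Removing that stray space should make unoconv honor the output path. Note that the shell does not tilde-expand a ~ that follows --output=, so spell the path out (or use $HOME):
unoconv -vvv --doctype=spreadsheet --format=csv --output=/home/richard/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods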

Copy all files with given extension to output directory using CMake

I've seen that I can use this command in order to copy a directory using cmake:
file(COPY "myDir" DESTINATION "myDestination")
My problem is that I don't want to copy all of myDir, but only the .h files that are in there. I've tried with
file(COPY "myDir/*.h" DESTINATION "myDestination")
but I obtain the following error:
CMake Error at CMakeLists.txt:23 (file):
  file COPY cannot find
  "/full/path/to/myDIR/*.h".
How can I filter the files that I want to copy to a destination folder?
I've found the solution myself:
file(GLOB MY_PUBLIC_HEADERS
  "myDir/*.h"
)
file(COPY ${MY_PUBLIC_HEADERS} DESTINATION myDestination)
This also works for me:
install(DIRECTORY "myDir/"
        DESTINATION "myDestination"
        FILES_MATCHING PATTERN "*.h")
The alternative approach provided by jepessen does not take into account that the number of files to be copied may be too high; I encountered the issue when copying more than 110 files.
Due to a limitation of Windows on the number of characters (2047 or 8191) in a single command line, this approach may fail seemingly at random depending on the number of headers in the folder. More info here: https://support.microsoft.com/en-gb/help/830473/command-prompt-cmd-exe-command-line-string-limitation
Here is my solution:
file(GLOB MY_HEADERS myDir/*.h)
foreach(CurrentHeaderFile IN LISTS MY_HEADERS)
  add_custom_command(
    TARGET MyTarget PRE_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CurrentHeaderFile} ${myDestination}
    COMMENT "Copying header: ${CurrentHeaderFile}")
endforeach()
This works like a charm on macOS. However, if you have another target that depends on MyTarget and needs to use these headers, you may get compile errors on Windows because the includes are not found yet. Therefore you may prefer the following option, which defines an intermediate target per copied file.
function(CopyFile ORIGINAL_TARGET FILE_PATH COPY_OUTPUT_DIRECTORY)
  # Copy to the disk at build time so that when the header file changes,
  # it is detected by the build system.
  set(input ${FILE_PATH})
  get_filename_component(file_name ${FILE_PATH} NAME)
  set(output ${COPY_OUTPUT_DIRECTORY}/${file_name})
  set(copyTarget ${ORIGINAL_TARGET}-${file_name})
  add_custom_target(${copyTarget} DEPENDS ${output})
  add_dependencies(${ORIGINAL_TARGET} ${copyTarget})
  add_custom_command(
    DEPENDS ${input}
    OUTPUT ${output}
    COMMAND ${CMAKE_COMMAND} -E copy_if_different ${input} ${output}
    COMMENT "Copying file to ${output}.")
endfunction()

foreach(HeaderFile IN LISTS MY_HEADERS)
  CopyFile(MyTarget ${HeaderFile} ${myDestination})
endforeach()
The downside indeed is that you end up with multiple targets (one per copied file), but they should all sort together (alphabetically) since they share the same prefix, ORIGINAL_TARGET -> "MyTarget".

Where / how / in what context is ipython's configuration file executed?

Where is ipython's configuration file, which starts with c = get_config(), executed? I'm asking because I want to understand what order things are done in ipython, e.g. why certain commands will not work if included as c.InteractiveShellApp.exec_lines.
This is related to my other question, Log IPython output?, because I want access to a logger attribute, but I can't figure out how to access it in the configuration file, and by the time exec_lines are run, the logger has already started (it's too late).
EDIT: I've accepted a solution based on using a startup file in IPython 0.12+. Here is my implementation of that solution:
from time import strftime
import os.path

ip = get_ipython()
#ldir = ip.profile_dir.log_dir
ldir = os.getcwd()
fname = 'ipython_log_' + strftime('%Y-%m-%d') + ".py"
filename = os.path.join(ldir, fname)
notnew = os.path.exists(filename)
try:
    ip.magic('logstart -o %s append' % filename)
    if notnew:
        ip.logger.log_write(u"########################################################\n")
    else:
        ip.logger.log_write(u"#!/usr/bin/env python\n")
        ip.logger.log_write(u"# " + fname + "\n")
        ip.logger.log_write(u"# IPython automatic logging file\n")
        ip.logger.log_write(u"# Started Logging At: " + strftime('%Y-%m-%d %H:%M:%S\n'))
        ip.logger.log_write(u"########################################################\n")
    print " Logging to " + filename
except RuntimeError:
    print " Already logging to " + ip.logger.logfname
There are only two subtle differences from the proposed solution linked:
1. It saves the log to the cwd instead of some log directory (though I like that more...).
2. ip.magic_logstart doesn't seem to exist; instead one should use ip.magic('logstart').
The config system sets up a special namespace containing the get_config() function, runs the config file, and collects the values to apply them to the objects as they're created. Referring to your previous question, it doesn't look like there's a configuration value for logging output. You may want to start logging yourself after config, when you can control it more precisely. See this example of starting logging automatically.
Your other question mentions that you're limited to 0.10.2 on one system: that has a completely different config system that won't even look at the same file.
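For illustration, here is a minimal sketch of such a config file (the path shown is the default-profile location, which is an assumption about your setup). The key point is that get_config() is provided by the special namespace the config system sets up, and the values are merely collected here and applied later, when the corresponding objects are instantiated:
# ~/.ipython/profile_default/ipython_config.py  (default profile location)
c = get_config()  # injected into this file's namespace by the config loader

# Collected now, applied when InteractiveShellApp is created. Anything that
# needs a live shell object (like the logger) runs too early here and
# belongs in a startup file instead.
c.InteractiveShellApp.exec_lines = ["from time import strftime"]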