The automated code review service LGTM uses "wrapper scripts around the popular build tools like pkg-config, CMake, and qmake" to detect missing files during the build process and to install the corresponding packages automatically.
My project uses CMake, but in one place I need qmake to query a path from Qt:
get_target_property(qt5_qmake Qt5::qmake IMPORTED_LOCATION)
execute_process(COMMAND ${qt5_qmake} -query QT_INSTALL_TRANSLATIONS OUTPUT_VARIABLE QT_QM_PATH)
These lines work fine everywhere I've tested them, except in the LGTM environment.
In the LGTM environment, the desired information is not captured in QT_QM_PATH but is printed instead (it's visible in the build log).
I strongly suspect the wrapper is causing this, since other commands work as expected.
Question: How can I stop LGTM from wrapping qmake, or how can I trick CMake into capturing the output of the wrapped qmake?
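For reference, here is a diagnostic sketch I have considered (only a guess, not a confirmed fix: it assumes the wrapper forwards qmake's answer to stderr rather than stdout, which would explain why it shows up in the build log but not in the variable):

get_target_property(qt5_qmake Qt5::qmake IMPORTED_LOCATION)
# Capture both streams; a wrapper script may write the reply to stderr.
execute_process(COMMAND ${qt5_qmake} -query QT_INSTALL_TRANSLATIONS
                OUTPUT_VARIABLE QT_QM_PATH
                ERROR_VARIABLE qt5_qmake_stderr
                OUTPUT_STRIP_TRAILING_WHITESPACE
                ERROR_STRIP_TRAILING_WHITESPACE)
if("${QT_QM_PATH}" STREQUAL "" AND NOT "${qt5_qmake_stderr}" STREQUAL "")
    set(QT_QM_PATH "${qt5_qmake_stderr}")  # assumption: the wrapped qmake printed the path here
endif()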
Related
I have set up one of my recipes to use autotools to build my project. I recently decided to run bitbake with verbose output turned on, and I noticed that autogen.sh never gets called anywhere in my build or compilation; it goes straight to my configure.ac.
Why is this? I thought autogen.sh was required. Is there a way to make it use autogen.sh?
OE-Core's autotools.bbclass is set up to run autoreconf, libtoolize and the other required tools itself, without using autogen.sh, since such scripts can vary a lot in quality. Your software will still have its configure script regenerated; it just won't use the helper script.
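If you really do want autogen.sh to run anyway, one approach that is often suggested (shown here only as an untested sketch; adjust to your recipe) is to prepend it to the configure task in the recipe:

do_configure_prepend() {
    # Run the project's own helper before autotools.bbclass regenerates configure.
    ( cd ${S} && ./autogen.sh )
}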
I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
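One possible direction, sketched with CMake's FetchContent module (requires CMake 3.14+ for FetchContent_MakeAvailable; the package name, repository URL, and tag below are placeholders, and it assumes the library's own CMakeLists.txt can be added via add_subdirectory()):

find_package(MyLib QUIET)                 # first try the existing find module
if(NOT MyLib_FOUND)
    include(FetchContent)
    FetchContent_Declare(mylib
        GIT_REPOSITORY https://github.com/example/mylib.git   # placeholder URL
        GIT_TAG        master)                                 # pin a tag/commit if you care about versions
    # Clones the repository at configure time and adds its CMakeLists.txt
    # to the current build via add_subdirectory().
    FetchContent_MakeAvailable(mylib)
endif()

A failed clone/download then fails the configure step, which matches the note above that a failed fallback means finding the package has failed.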
I was wondering if there is a way to get Bazel to list, output, display, etc. all of the commands it runs during a build after a clean, in a form that can be executed from a command line. I do not care if the output goes to the screen, to a file, etc.; I will massage it into a usable form if necessary.
I have captured the screen output during a run of Bazel, which gives me an idea of what is being done, but it does not give me commands I can execute on the command line. The commands would have to include all of the command options with their actual values rather than variables.
If this is not possible: since Bazel is open source, where in the code are the lines that represent the commands to be run, so that I can modify Bazel to output the executable commands?
I am aware of the query command within Bazel, and have used it to generate the dependency diagram. If this could be done as a query command, it would be even better.
TLDR;
My goal is to build TensorFlow using Bazel on Windows. Yes, I know of all the problems and reasons NOT to do it, and I have successfully installed TensorFlow on Windows via a virtual machine or Docker. I did take a shot at building Bazel on Windows starting with Cygwin, but that started to get out of hand, as I am used to installing with packages and Cygwin doesn't play nicely with packages. Then I started trying to build Bazel by hand, and that was turning into a quagmire. So I am now trying to just build TensorFlow by hand on Windows by duplicating what Bazel would do to build TensorFlow on Linux.
You are correct, you can use the -s (--subcommands) option:
bazel build -s //foo
See https://docs.bazel.build/versions/master/user-manual.html#flag--subcommands.
For your use case, you'd probably want to redirect the output to a file and then globally replace any library/binary paths with their Windows equivalents.
You might want to track https://github.com/bazelbuild/bazel/issues/276 (Windows support), although it'll probably be a while.
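For example, assuming a POSIX shell (Bazel prints its progress, including the subcommand lines, to stderr, so capture both streams):

bazel build -s //foo 2>&1 | tee subcommands.log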
(Disclaimer: this solution does not print the commands as they are being executed, but the commands that would be, or have been, executed.)
I'd use aquery (action graph query) (forget about "graph"):
bazel aquery //foo
Advantages:
It's very fast, because it prints the actions without executing the build.
It's a query. It does not have side effects.
You don't have to do a bazel clean first in order to find out the build steps for a library that has already been built.
It prints information about the specific build step that you request. It does not print all the build commands required for the dependencies.
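As a usage example (with a hypothetical target name): if you do want the dependencies' actions included, or only a particular kind of action, aquery accepts the usual query expressions plus action filters such as mnemonic():

bazel aquery 'mnemonic("CppCompile", deps(//foo))'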
I recently started using qemu and it's a great tool when you don't have the required hardware to run your firmware (currently using it for cortex-m3).
Now what I want to do is some test coverage with it. I tried using GNU ARM Eclipse, and I've been successful compiling and executing the code in qemu. When I add the -fprofile-arcs -ftest-coverage flags (for the project and then for the file I want coverage for), the build creates the .gcno file, which means that after executing my code it should generate a .gcda file, and then I should be able to see the coverage.
That's where everything goes wrong. I was able to generate a .gcda file, but whenever I try to open any of them, Eclipse tells me that it wasn't able to open the file because it was null. I've tried replicating the procedure on another computer, but I haven't been successful in creating the .gcda file (probably different binaries).
At this point I don't really know how to proceed. Should I abandon ARM Eclipse and stick to makefiles (is it possible to run gcov this way?), or am I missing something really small that is fixable?
PS: I'm using Windows 7 64-bit, with the latest versions available on the GNU ARM Eclipse website. Also, the idea of doing it via makefiles only just occurred to me (it was a stressful day, it's really late), so I haven't tried it yet; I've only tried executing the code, but without coverage.
As far as I know, qemu is not able to generate DWARF information. But there is a project aiming at code coverage with qemu: the Couverture project.
When you use qemu as a user-space emulator (see also the qemu documentation), you can actually measure the code coverage as usual. In this mode qemu has full access to the host file system.
For a CMake project you can simply use the CROSSCOMPILING_EMULATOR property of your test executable, e.g.:
if(CMAKE_CROSSCOMPILING)
    set_target_properties(mytest
        PROPERTIES
            CROSSCOMPILING_EMULATOR "qemu-${CMAKE_SYSTEM_PROCESSOR};-L;$ENV{SDKTARGETSYSROOT}"
    )
endif()
With this setting ctest will use qemu for running the test and will write the .gcda files to the usual location in your build directory.
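A typical follow-up, sketched under the assumption that you post-process the coverage data with lcov and that a cross gcov for your toolchain is available (the gcov tool name below is a placeholder):

ctest
lcov --capture --directory . --gcov-tool arm-linux-gnueabi-gcov --output-file coverage.info
genhtml coverage.info --output-directory coverage-html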
My Eclipse tries to compile/build Perl files in my Java project and fails. I installed Perl EPIC just for syntax colouring; how can I get it to ignore errors?
I tried going into Project->Properties->Builders and unchecking Perl Epic, but this didn't change anything.
I'm using Eclipse Helios Service Release 1
Build id: 20100917-0705
On Windows XP
I have basically the same issue as this question:
How can I set up Eclipse to edit Perl without the runtime checking?
I've been looking into a similar issue for quite some time too. Apparently the EPIC Perl plugin wildly checks every folder and file it finds inside the project, so in a project like mine, with config files and data directories, it goes inside and tries to validate "Perl stuff", which is evidently an annoyance: the error log view displays a lot of useless information.
Did you try to uncheck the "Perl Auto Builder"?
I'm having trouble parsing this sentence in the context of your question: "My eclipse tries to compile/build perl files in my java project and fails."
Are you saying that you are running Perl as a Java project, and getting the inevitable error message because it is not Java? I'm just wondering why you don't have your Perl program set up as a Perl project, possibly referenced by your Java project, assuming that is what you are trying to do.
Generally, when I set up a Perl project, I edit its properties and set its includes to match the current directory or local module paths. Suppose there are self-written modules I must call that are not located on this machine (e.g. I wouldn't have FOO::smb on a Windows machine -- it makes no sense there; when I am developing for Linux, I put all my functions in it for convenience's sake).
In that case, I create a FOO directory in the workspace, and create a dummy FOO::smb module with however many stub functions in it to get me going and let my syntax highlighting and error checking do their proper jobs for me. If I write dummy subs that match the real modules well enough, I can debug my scripts somewhat before uploading them. I figure that I should be well enough aware of what they are supposed to do anyway.
I will go so far as to dummy out CPAN modules when installing them on my development workstation makes no sense or is impossible. Highlighting and syntax checking are both invaluable tools, and finding a way to make both of them work saves my sanity.
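As a concrete (made-up) illustration of such a stub, a FOO/smb.pm in the workspace only needs the package declaration and a few do-nothing subs that mirror the real module's interface:

package FOO::smb;   # dummy stub; the real module only exists on the Linux machines
use strict;
use warnings;

# Just enough of the interface for EPIC's syntax checking and highlighting
# to resolve the calls made from my scripts.
sub connect_share { return 1; }
sub read_file     { return ''; }

1;   # a Perl module must return a true value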