Using fakechroot in a buildroot post-build script

I am building a kernel and ramdisk image for an ARM target using buildroot. I want to provision a certificate for the ARM target, which requires me to create a certificate request from within the ARM target (I guess using fakechroot, and the openssl from within the chroot environment). Should I be able to do this, possibly from a post-build script?
I see buildroot generates an out/build/host-fakeroot-$(version) folder, which contains a faked binary; fakechroot is also installed through a Debian package on my system (Ubuntu). But I am clueless as to what commands are needed to generate the request from within the chroot, run purely host-side commands to generate the certificate, and then concatenate everything and put the certificate back into the target. This means going in and out of the chroot if I do everything from a post-build script.
Thanks
Ratin
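
Note that the openssl inside the target rootfs is an ARM binary, so a plain chroot (faked or not) on an x86 host cannot execute it without qemu user-mode emulation. A simpler route is to run everything host-side from the post-build script and only install the results into the target tree. A minimal sketch, assuming the host has openssl available; paths, file names, and the certificate subject are all illustrative:
#!/bin/sh
# Buildroot post-build script: $1 is the target rootfs (TARGET_DIR).
TARGET_DIR="$1"
CERT_DIR="$TARGET_DIR/etc/ssl/private"   # illustrative location
mkdir -p "$CERT_DIR"
# Generate the key and the certificate request with the *host* openssl.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout "$CERT_DIR/device.key" -out /tmp/device.csr \
    -subj "/CN=arm-target"
# Sign the request host-side (self-signed here, purely for illustration).
openssl x509 -req -in /tmp/device.csr -signkey "$CERT_DIR/device.key" \
    -days 365 -out "$CERT_DIR/device.crt"
# Concatenate cert and key into whatever bundle the target software expects.
cat "$CERT_DIR/device.crt" "$CERT_DIR/device.key" > "$CERT_DIR/device.pem"
This way there is no need to go in and out of a chroot at all.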

Related

yocto SDK integration with VSCode

Is there a way, or steps to follow, to integrate a Yocto SDK (standard or extensible) with VS Code? I want to cross-compile, remote-connect, and debug a C/C++ application within VS Code for target hardware using a Yocto-generated Linux image. Is this possible? I know of the bitbake extension, but couldn't find one for the SDK. Thank you!
Conservatively, I would say it depends on the level of integration you want to achieve, but I regularly use VS Code to edit and build, and sometimes to debug, C applications using a Yocto toolchain; that's really easy for Makefile projects, for example.
I'm assuming you are not asking for full Yocto integration into VS Code (I don't know if something like that exists), but rather how to use the tools generated by the Yocto SDK, and that you are already familiar with the Yocto toolchain.
I personally compile remotely on a Linux server from a Windows PC, so the server holds my projects and the Yocto toolchain.
For that I use the nice Remote SSH extension from Microsoft in VS Code. From there, I can easily edit the files and compile, and a terminal is available (that's out of the scope of your question, however).
So whether you work as I do or directly on Linux, you can create a Makefile/CMake project, for example. The C/C++ VS Code extension is a must-have.
Each time you start working, you source the Yocto SDK toolchain and compile directly using make from the terminal window of VS Code. If you want to automate the build step, you can use the task feature of VS Code, which lets you launch a build script, for example.
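A typical session from the VS Code terminal might look like this (the environment-setup script name depends on your target architecture and SDK install path, so treat it as illustrative):
. /opt/poky/environment-setup-cortexa8hf-neon-poky-linux-gnueabi   # source the SDK
make   # $CC, $CFLAGS, etc. now point at the cross toolchain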
Regarding the remote connection, the VS Code terminal window can also have multiple sub-windows with various connections, such as SSH to the target. The build script can also use scp to send the generated binary directly to the target, but your question is vague about what exactly you want to do.
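For example (address and path are illustrative):
scp myapp root@192.168.0.10:/usr/bin/   # push the freshly built binary
ssh root@192.168.0.10                   # open a session on the target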
Finally, for the debug aspect, GDB is well supported in VS Code, and the official documentation is a good start, as is the C++ debugging doc.
On the Yocto side, you need to add gdbserver to the image running on the target; this can be done by adding the following to your conf/local.conf:
EXTRA_IMAGE_FEATURES += "tools-debug"
If you want to have debug information for the shared libs on the target, you also need to add:
EXTRA_IMAGE_FEATURES += "dbg-pkgs"
Finally, the SDK must be generated with the same options as the image running on the target, and it will contain the cross-gdb tool (named something like <target-prefix>-gdb) to be used on the host side.
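For example, the standard way to produce an SDK matching a given image, and a sketch of the resulting debug session (image name, target IP, and port are illustrative):
bitbake core-image-minimal -c populate_sdk   # build the SDK matching the image
# On the target (gdbserver comes with the tools-debug feature):
gdbserver :2345 /usr/bin/myapp
# On the host, after sourcing the SDK environment ($GDB is set by that script):
$GDB myapp -ex "target remote 192.168.0.10:2345"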
So it's possible, but it requires some setup, especially the debug part. As far as I know, there is no VS Code extension that manages all these steps for you automagically.

Code coverage with qemu

I recently started using qemu, and it's a great tool when you don't have the required hardware to run your firmware (I'm currently using it for a Cortex-M3).
Now what I want to do is some test coverage with it. I tried using GNU ARM Eclipse, and I was successful compiling and executing the code in qemu. Whenever I add the -fprofile-arcs -ftest-coverage flags (for the project, and then for the desired file to run coverage on), I am able to create the .gcno file, which means that after executing my code it should generate a .gcda file, and then I should be able to see the coverage.
That's where everything goes wrong. I was able to generate a .gcda file, but whenever I try to open any of them, Eclipse tells me it wasn't able to open the file because it was null. I've tried replicating the procedure on another computer, but there I haven't been successful creating the .gcda file (probably different binaries).
At this point I don't really know how to proceed. Should I abandon ARM Eclipse and stick to makefiles (is it possible to run gcov this way?), or am I missing something really small that is fixable?
PS: I'm using Windows 7 64-bit and the latest versions available on the GNU ARM Eclipse website. The idea of doing it via makefiles only just occurred to me (it was a stressful day, and it's really late), so I haven't tried it yet; I've only tried executing the code, but without coverage.
As far as I know, qemu is not able to generate DWARF information. But there is a project whose goal is code coverage with qemu: the Couverture Project.
When you use qemu as a user-space emulator (see also the qemu documentation), you actually can measure code coverage as usual. In this mode, qemu has full access to the host file system.
For a CMake project you can simply use the CROSSCOMPILING_EMULATOR property of your test executable, e.g.:
if(CMAKE_CROSSCOMPILING)
  set_target_properties(mytest
    PROPERTIES
      CROSSCOMPILING_EMULATOR "qemu-${CMAKE_SYSTEM_PROCESSOR};-L;$ENV{SDKTARGETSYSROOT}"
  )
endif()
With this setting ctest will use qemu for running the test and will write the .gcda files to the usual location in your build directory.
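The equivalent manual invocation, roughly what ctest runs under the hood here, would look something like this (tool names are illustrative for an ARM target):
qemu-arm -L "$SDKTARGETSYSROOT" ./mytest   # writes the .gcda files into the build dir
arm-poky-linux-gnueabi-gcov mytest.c       # then inspect the coverage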

Strong naming for Microsoft Enterprise Library

I am using the Microsoft Enterprise Library in one of my projects. I need to strong-name one of the DLLs, Microsoft.Practices.EnterpriseLibrary.Common, but it is not working.
When I decompile it using ILDASM, it generates 3 files:
IL file
.RESOURCES file
Common resource script file
How do I compile it with the key file? Which ILASM command should I use?
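For reference, the round-trip the question is aiming at usually looks like this; mykey.snk is a hypothetical key file, and the options shown are standard ILDASM/ILASM options:
ildasm Microsoft.Practices.EnterpriseLibrary.Common.dll /out:Common.il
ilasm Common.il /dll /resource=Common.res /key=mykey.snk /output=Microsoft.Practices.EnterpriseLibrary.Common.dll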
The DLLs are distributed from the original install in a few different forms. One set of files is already signed, so you need to find that set and use the files from it.
When you install the EntLib package, you get the compiled binaries (some of which are signed) AND you get the source code; compiling that source code yourself produces unsigned DLLs.
My guess is that you are using the non-signed files (compiled from the source code on your local machine) instead of the signed ones.

Moses server installation

I have installed Moses successfully. I have also installed xmlrpc-c via
sudo apt-get install libxmlrpc-core-c3
and then built Moses via
./bjam --with-xmlrpc-c=[/path/to/xmlrpc-c-config]
While doing this I followed the instructions at http://www.statmt.org/moses/?n=Development.GetStarted, so up to that point I guess everything was correct. Now I need to connect to the machine where Moses is installed, but I could not start mosesserver. What should I do with the file mosesdecoder/contrib/server/mosesserver.cpp? I thought the build should have created an executable from it, or am I supposed to compile it manually? BTW, the remote machine is x86_64 GNU/Linux.
Thanks in advance...
"mosesserver" binary executable is located in mosesdecoder/bin directory after successful compilation.
It can be started in a similar fashion to moses, i.e.
/path/to/mosesserver -f /path/to/moses.ini
It will run a web server on port 8080 by default, expecting the XML-RPC v2 protocol for communication.
For building, make sure you have the Boost libraries (plus the devel package) installed at a location where they can be found (e.g. /lib, /usr/lib, or /lib64, depending on the system), or add the path to LD_LIBRARY_PATH if you compiled them manually.
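For example, if you built Boost manually under a non-standard prefix (the path is illustrative):
export LD_LIBRARY_PATH=/opt/boost/lib:$LD_LIBRARY_PATH
ldd /path/to/mosesdecoder/bin/mosesserver | grep -i boost   # check that Boost resolves
/path/to/mosesdecoder/bin/mosesserver -f /path/to/moses.ini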

com0com silent install (test signed com0com.sys shows up as signed in explorer but not in Device Manager)

My goal is to have the com0com serial driver install without popping up the install wizard on both WinXP and Win2000.
I am working on WinXP x86. I have followed the test signing instructions for the com0com driver, replacing amd64 with i386 at line 60.
I have added my test certificate as both a root and trustedprovider using the following commands:
certmgr /add com0com.cer /r localMachine root
certmgr /add com0com.cer /r localMachine trustedprovider
And verified that it is listed under both locations.
I then run the newly built setup.exe. This installs the signed com0com.sys file into C:\WINDOWS\system32\DRIVERS and sets up a pair of virtual serial ports and a bus between them. Using explorer, I go to the DRIVERS directory, right click on the com0com.sys file and verify that it has the "test" digital signature. I then go into Device Manager, open the "com0com serial port emulators" entry, pick an entry and do Properties->Driver and see that it says "Not digitally signed". I click details for the driver and can see that it is referring to the com0com.sys driver file that I just confirmed is signed.
I found what might be a related issue but I'm not sure. Does WinXP demand a WHQL signature? If so, does that explain why the com0com.sys file is signed but the device driver entries say they aren't signed?
Yes: when talking about drivers, Windows 2000 and Windows XP have only one particular signature in mind -- the WHQL signature. Without putting the com0com driver through the WHQL process, it simply won't be considered signed.
The instructions in Building.txt relating to signing are talking about a different "constraint", imposed by 64-bit editions of Windows Vista and higher -- they simply won't load drivers that are not signed at all -- but that's unrelated to your problem.