How do I use the "Simics Training" and "QSP CPU" packages?

1 - There's a "Simics Training" package shown in the package manager, and a "targets\simics-user-training" and a "targets\workshop-01". Where is the documentation about starting up and going through these trainings? (I assume this is different from the normal "my-simics-project-1/documentation.html" documentation, because that documentation never references either of those targets in the Getting Started section.)
2 - In the documentation there's a line: "The QSP-x86 package contains a legacy processor core which is used by default in the included simulated machines. To use more modern processors, the package QSP-CPU can be installed, which contains recent processor cores." How does one actually use the QSP-CPU to select a different CPU to be simulated? (Related: I see in the release notes a bunch of mentions of ICH10. Is that what the default QSP-x86 "targets\qsp-x86\firststeps.simics" is simulating? Ideally I'd like to simulate at least a PCH-based system.)

#Point 1
If you check the doc/ folder in your Simics project, you should have the lab instructions. It is a bit inconsistent that they are stand-alone PDFs, but that comes from how they are built currently. Look for nut-001 and workshop-01.
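For example, on Windows you can search for them from the project root (the project path here is illustrative):
C:\Users\you\simics-projects\my-simics-project-1>dir /s /b doc | findstr /i "nut-001 workshop-01"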
#Point 2
If you have installed everything, use the scripts "qsp-atom-core.simics" etc. to run the standard QSP setup but with a different type of core. For example:
> simics.bat targets\qsp-x86\qsp-client-core.simics
To see how that core is selected, open the script file. For example, to look at the client-core script, first print (type or cat) the trampoline script in the project:
C:\Users\jengblo\simics-projects\my-simics-project-5>type targets\qsp-x86\qsp-client-core.simics
# Auto-generated file. Any changes will be overwritten!
decl { substitute "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics" }
run-command-file "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics"
Given that trampoline, go to the actual script file:
C:\Users\jengblo\simics-projects\my-simics-project-5>type C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics
# In order to run this, the QSP-x86 (2096), QSP-CPU (8112) and
# QSP-Clear-Linux (4094) packages should be installed.
decl {
    ! Script that runs the Quick Start Platform (QSP) with a client processor core.
    params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
    default cpu_comp_class = "x86-coffee-lake"
    default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
And note how the "cpu_comp_class" parameter is set. The way to find the available classes is a bit obscure, admittedly. In your running Simics session started from the client-core script (for example), check the classes of the components inside the motherboard:
simics> list-components board.mb
┌─────────┬─────────────────────────┐
│Component│Class │
├─────────┼─────────────────────────┤
│cpu0 │processor_x86_coffee_lake│
│gpu │pci_accel_vga_comp │
│memory │simple_memory_module │
│nb │northbridge_x58 │
│sb │southbridge_ich10 │
└─────────┴─────────────────────────┘
Note the class of the cpu0 component. To find other classes following the same naming pattern, use the list-classes command:
simics> list-classes substr = processor_x86
The following classes are available:
┌─────────────────────────────┬──────────────────────────────┐
│ Class │ Short description │
├─────────────────────────────┼──────────────────────────────┤
│processor_x86QSP1 │N/A (module is not loaded yet)│
│processor_x86QSP2 │N/A (module is not loaded yet)│
│processor_x86_airmont │N/A (module is not loaded yet)│
│processor_x86_broadwell_xeon │N/A (module is not loaded yet)│
...
You can then build a custom script to start with a given core. Follow the pattern of "qsp-client-core.simics" as found in the installation. Copy that file into your project, and modify the core class as well as other parameters.
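For example, a minimal custom script could look like this (a sketch: the file name is made up, and the class name "x86-broadwell-xeon" is an assumption derived from the processor_x86_broadwell_xeon entry above, following the same pattern as "x86-coffee-lake"):
# my-qsp-broadwell.simics - hypothetical script in your project's targets folder
decl {
    ! QSP with Broadwell Xeon cores instead of the default.
    params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
    default cpu_comp_class = "x86-broadwell-xeon"
    default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
Run it like the stock scripts, e.g. simics.bat targets\my-qsp-broadwell.simics.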

Where on disk is the BIOS file used by Simics?

(I saw one of my previous posts didn't actually answer the "where's the BIOS file used by simics?" question, so I renamed the previous one and am pulling that question out and making it standalone here.)
I can see the BIOS code for a default "targets\qsp-x86\firststeps.simics" invocation by just stepping through the debugger from the start. But if I want to see the full binary, is there a specific file somewhere I can look at?
You can check the "bios" attribute on the motherboard:
simics> board.mb->bios
"%simics%/targets/qsp-x86/images/SIMICSX58IA32X64_1_0_0_bp_r.fd"
You can specify which BIOS image to use via the bios_image script parameter to the qsp-clear-linux.simics script.
Help info for the script:
$ ./simics -h targets/qsp-x86/qsp-clear-linux.simics
System:
  bios_image - existing file or NIL
      BIOS file.
      Default value:
      "%simics%/targets/qsp-x86/images/SIMICSX58IA32X64_1_0_0_bp_r.fd"
You can run with your own BIOS like this:
$ ./simics -e '$bios_image=my-bios.bin' targets/qsp-x86/qsp-clear-linux.simics
Note that the BIOS is not handled quite consistently with some other things. Typically in Simics, disks and similar storage are images. You can list them using list-persistent-images and resolve their locations using lookup-file:
simics> list-persistent-images
┌─────────────────────┬────────────┬───────────────────────────────────────────────────────┐
│Image │Unsaved data│File(s) (read-only/read-write) │
├─────────────────────┼────────────┼───────────────────────────────────────────────────────┤
│board.disk0.hd_image │ no│%simics%/targets/qsp-x86/images/cl-b28910-v2.craff (ro)│
│board.disk1.hd_image │ no│ │
│board.mb.sb.spi_image│ yes│%simics%/targets/qsp-x86/images/spi-flash.bin (ro) │
└─────────────────────┴────────────┴───────────────────────────────────────────────────────┘
simics> lookup-file "%simics%/targets/qsp-x86/images/spi-flash.bin"
"/disk1/simics-6/simics-qsp-x86-6.0.47/targets/qsp-x86/images/spi-flash.bin"
The BIOS in the QSP is just loaded straight into target memory for execution, which is a bit of a cheat for convenience.
Upon searching around, I found the following folder:
C:\Users\yourusername\AppData\Local\Programs\Simics\simics-qsp-x86-6.0.44\targets\qsp-x86\images
Inside that folder are the following three files:
SIMICSX58IA32X64_1_0_0_bp_r.fd
SIMICSX58IA32X64-ahci.fd
spi-flash.bin
Both SIMICSX58IA32X64_1_0_0_bp_r.fd and SIMICSX58IA32X64-ahci.fd have UEFI firmware volume headers at the start, and a seeming BIOS entry point at the end. The spi-flash.bin seems to have a placeholder for the flash descriptor which would go at the start of the flash, but is mostly empty. So I believe Intel basically either stitches these together in memory, or possibly just uses spi-flash.bin to allow for "soft strap" configuration or some such (since it's a virtual MCH/ICH anyway).

"failed to load any lstm-specific dictionaries for lang " tesseract 4.1

I tried to train tesseract 4.1 using the OCRD project. After training completed I copied the lang.traineddata, but I am getting the above error.
The tesseract wiki page is very confusing; it asks to use combine_lang_model after making the lstmf file. I actually already have the lstmf file; I created it using tif/box pairs.
Please help me with the further steps.
Related discussions: Failed to load any lstm-specific dictionaries for lang xxx
Suppose your training folder looks like this:
OCRD/makefile
OCRD/data/foo-ground-truth
You could try the following steps:
1. Find WORDLIST_FILE/NUMBERS_FILE/PUNC_FILE in the makefile, and change them to:
WORDLIST_FILE := data/$(MODEL_NAME).wordlist
NUMBERS_FILE := data/$(MODEL_NAME).numbers
PUNC_FILE := data/$(MODEL_NAME).punc
2. Suppose your base traineddata is eng.traineddata:
2.1 Download the .wordlist/.numbers/.punc files from langdata_lstm.
2.2 Place them in OCRD/data.
2.3 If MODEL_NAME = foo, rename them to foo.wordlist, foo.numbers, foo.punc.
(If you don't have the base traineddata, you can try this too. But if your base traineddata is, say, afr, you should download the files from langdata_lstm/afr.)
3. Run make training again.
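A concrete sketch of step 2 for an eng base model might look as follows (the raw.githubusercontent.com URLs are assumptions; check the langdata_lstm repository for the actual layout):
cd OCRD
wget -P data https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.wordlist
wget -P data https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.numbers
wget -P data https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.punc
mv data/eng.wordlist data/foo.wordlist   # assuming MODEL_NAME = foo
mv data/eng.numbers data/foo.numbers
mv data/eng.punc data/foo.punc
make training MODEL_NAME=foo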
The cause of this error:
In OCRD, the default path of the above three files is $(OUTPUT_DIR) = data/$(MODEL_NAME), and all files in this path are automatically generated during the training process.
If the variable START_MODEL is not assigned, the makefile will not generate any related files under this path.
If the variable START_MODEL has been assigned, foo.lstm-number-dawg, foo.lstm-punc-dawg, foo.lstm-word-dawg and so on will be produced in data/$(MODEL_NAME), but they are not the right ones. So there may be a bug in OCRD.

Yocto: find the recipe or class that defines a task

I am a Yocto noob, trying to decipher how the device tree is built from a Xilinx hardware definition (.hdf) file. But my question is more general.
Is there a Yocto way to find the source of a task?
Given a task name, is it possible to find where the task's source code lives (presumably in a recipe or class)?
As an example, where is the source for the Python task do_create_yaml, which is called by recipes in the meta-xilinx-bsp layer that compile the device tree blob?
bitbake -e device-tree
will dump the Python source for do_create_yaml (amongst the rest of its prodigious output), but how can I find where that is coming from?
The device tree is part of the Linux kernel. In Yocto, it is compiled from the KERNEL_DEVICETREE variable value, defined either as part of the Linux kernel recipe or in the machine configuration.
For example, for cubieboard7 as defined here,
KERNEL_DEVICETREE = "s700_cb7_linux.dtb"
instructs the build to use this dts file for compilation. Yocto does this using various classes.
In our example, we inherit kernel.bbclass, which in turn inherits kernel-devicetree.bbclass. In that class (copied from kernel-devicetree.bbclass):
do_compile_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        oe_runmake $dtb
    done
}
do_install_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        dtb_ext=${dtb##*.}
        dtb_base_name=`basename $dtb .$dtb_ext`
        dtb_path=`get_real_dtb_path_in_kernel "$dtb"`
        install -m 0644 $dtb_path ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext
    done
}
do_deploy_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        ...
This appends device-tree handling to the compile, install and deploy tasks, so defining KERNEL_DEVICETREE enables the automatic build of the dtb.
I found that the datastore contains the filename for tasks as a varflag. From a devpyshell:
pydevshell> d.getVarFlags("do_create_yaml")
gives
{'filename': '.....yocto/sources/core/../meta-xilinx-tools/classes/xsctyaml.bbclass', 'lineno': '61', 'func': 1, 'task': 1, 'python': '1', 'deps': ['do_prepare_recipe_sysroot']}
So for the example in my question the active definition for the do_create_yaml task is in xsctyaml.bbclass.
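Generalizing this, a small devpyshell loop can dump the defining file for every task (a sketch; it assumes the datastore d exposed by devpyshell and the filename/lineno varflags shown above):
# devpyshell sketch: print where each task is defined
for name in d.keys():
    flags = d.getVarFlags(name) or {}
    if flags.get("task"):
        print(name, flags.get("filename"), flags.get("lineno"))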

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or have the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
    ${TOPDIR}/meta-hello \
    /tmp/poky/meta \
    /tmp/poky/meta-poky \
    /tmp/poky/meta-yocto-bsp \
    "
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python expressions used in the recipe, because bb.data was deprecated and has since been removed. Replace bb.data.getVar with d.getVar, ex:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
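That manual check looks something like this (the URL is an assumption about what ${SITE} and the version variables expand to; adjust to the recipe's actual values):
$ wget http://www.nano-editor.org/dist/v2.2/COPYING
$ md5sum COPYING
f27defe1e96c2e1ecd4e0c9be8967949  COPYING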
Done!
Now bitbake nano ought to work, and when it completes you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, when you build with the bitbake that you got from poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor; its license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can use the same concept to code the next tasks (do_configure, do_compile):
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
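Following the same concept, a do_compile could look like this (a sketch, not from the book; the ./configure && make invocation assumes nano 2.2.6's autotools build system):
python do_compile() {
    # assumes the tarball unpacked into ${WORKDIR}/${P}, as done by do_unpack above
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and building nano")
    os.system("cd " + src_dir + " && ./configure && make")
}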
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Automake, generated source files and VPATH builds

I'm doing VPATH builds with automake. I'm now also using generated source, with SWIG. I've got rules in Makefile.am like:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
Then the file gets used later:
myprogram_SOURCES = ... whatever.cpp
It works fine when $builddir == $srcdir. But when doing VPATH builds (e.g. mkdir build; cd build; ../configure; make), I get error messages about missing whatever.cpp.
Should generated source files go to $builddir or $srcdir? (I reckon probably $builddir.)
How should dependencies and rules be specified to put generated files in the right place?
Simple answer
You should assume that $srcdir is read-only, so you must not write anything there.
So, your generated source-code will end up in $(builddir).
By default, autotools-generated Makefiles will only look for source files in $srcdir, so you have to tell make to check $builddir as well. Adding the following to your Makefile.am should help:
VPATH = $(srcdir) $(builddir)
After that you might end up with a no rule to make target ... error, which you should be able to fix by updating your source-generating rule as in:
$(builddir)/whatever.cpp: whatever.swig
	# ...
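Putting that together, a minimal sketch of the relevant Makefile.am pieces might be (the -o $@ flag tells SWIG where to write the generated file; this is a sketch under the assumptions above, not a drop-in file):
VPATH = $(srcdir) $(builddir)
dist_noinst_DATA = whatever.swig
$(builddir)/whatever.cpp: whatever.swig
	swig -c++ -php -o $@ $^
myprogram_SOURCES = whatever.cpp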
A better solution
You might notice that in your current setup, the release tarball (as created by make dist) will contain the whatever.cpp file as part of your sources, since you added this file to the myprogram_SOURCES.
If you don't want this (e.g. because it might mean that the build-process will really take the pregenerated file rather than generating it again), you might want to use something like the following.
It uses a wrapper source-file (whatever_includer.cpp) that simply includes the generated file, and it uses -I$(builddir) to then find the generated file.
Makefile.am:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
whatever_includer.cpp: whatever.cpp
myprogram_SOURCES = ... whatever_includer.cpp
myprogram_CPPFLAGS = ... -I$(builddir)
clean-local::
	rm -f $(builddir)/whatever.cpp
whatever_includer.cpp:
#include "whatever.cpp"
Usually, you want to keep $srcdir readonly, so that if for instance the source is distributed unpacked on a CDROM, you can still run /.../configure from some other part of the file-system.
However if you are using SWIG to generate source code for a wrapper library, you probably want to distribute that SWIG-generated code as well, so that your users do not need to install SWIG to compile your code. Then you have indeed a choice: you can decide that the SWIG-generated code should end up in $builddir (that is OK: make dist will collect it there and include it in the tarball), or you can decide to output SWIG-generated code in $srcdir, since it really is a source from the point of view of the distributed package.
An advantage of keeping it in $srcdir is that when make distcheck attempts to build your package from a read-only source directory, it will fail on any attempt to call SWIG to regenerate the wrapper source. If you had the wrapper source in $builddir, you might not notice that some broken rule causes SWIG to be run on the user's host; by generating in $srcdir you ensure that SWIG is not needed by your users.
So my preference is to output SWIG wrapper sources in $srcdir. My setup for Python wrappers looks as follows:
EXTRA_DIST = spot.i
python_PYTHON = $(srcdir)/spot.py # _PYTHON is distributed by default
pyexec_LTLIBRARIES = _spot.la
MAINTAINERCLEANFILES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot.py
_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la
$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
# Handle the multi-file output of SWIG.
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
Note that I use $(srcdir) for all targets, because of limitations of the VPATH feature on various flavors of make. My setup to deal with the multiple files output by SWIG could be improved, but as these rules are not run by users and it has never caused me any problem, I do not bother.