Where do the "virtual/..." terms come from? - yocto

With BitBake I can build e.g. the Linux kernel with bitbake virtual/kernel, or U-Boot with bitbake virtual/bootloader.
Where do those "virtual/..." terms come from?
I searched the poky directory for patterns such as "virtual/kernel", but there are nearly endless results and I don't know where to look.
Can I, for example, point virtual/bootloader to a custom recipe if I have written my own bootloader?

From the BitBake user manual:
As an example of adding an extra provider, suppose a recipe named
foo_1.0.bb contained the following:
PROVIDES += "virtual/bar_1.0"
The recipe now provides both "foo_1.0" and "virtual/bar_1.0". The "virtual/" namespace is often used to denote
cases where multiple providers are expected with the user choosing
between them. Kernels and toolchain components are common cases of
this in OpenEmbedded.
Sometimes a target might have multiple providers. A common example is
"virtual/kernel", which is provided by each kernel recipe. Each
machine often selects the best kernel provider by using a line similar
to the following in the machine configuration file:
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
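So yes: you can point virtual/bootloader at a recipe of your own by declaring the provider in your recipe and selecting it in your machine configuration. A minimal sketch, with hypothetical recipe and machine names:

# my-bootloader_1.0.bb
PROVIDES += "virtual/bootloader"

# conf/machine/my-machine.conf (in your BSP layer)
PREFERRED_PROVIDER_virtual/bootloader = "my-bootloader"

With that in place, bitbake virtual/bootloader resolves to my-bootloader. The existing U-Boot recipes do exactly the same thing, which is also why searching poky for "virtual/kernel" returns so many hits: every kernel recipe provides it and every machine configuration selects one.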

Look in your-meta-layer/conf/machine/: the machine configuration files there contain these variable assignments.
Bootloader recipes (.bb) for barebox or U-Boot live under your-meta-layer/recipes-bsp/.

Related

Can Yocto create more than one module from a single recipe?

I am trying to create a set of Linux driver modules under Yocto. The drivers are fundamentally very similar but, for hardware reasons, have a separate source file for each interface, resulting in a set of N modules. All of the drivers share functionality contained in a set of separate files.
What I would like is for all the module source files and the shared files to live in one directory (or the latter in a subdirectory), with the Yocto recipe generating a module from each driver source, resulting in N separate modules from the one recipe.
Is this feasible or can anyone suggest an alternative that does not require replication of the shared files for each module?
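This is feasible with the stock module.bbclass: a kbuild Makefile can list several obj-m targets, and the kernel-module-split machinery that module recipes inherit then packages each resulting .ko separately. A minimal sketch, with hypothetical file and module names (license boilerplate omitted):

# mydrivers_1.0.bb
SUMMARY = "N similar driver modules from one recipe"
inherit module
SRC_URI = "file://Makefile \
           file://iface1.c \
           file://iface2.c \
           file://common.c \
           file://common.h"
S = "${WORKDIR}"

# Makefile (kbuild): one obj-m entry per module, each linking the shared code
obj-m := iface1_drv.o iface2_drv.o
iface1_drv-objs := iface1.o common.o
iface2_drv-objs := iface2.o common.o

all:
	$(MAKE) -C $(KERNEL_SRC) M=$(shell pwd)

modules_install:
	$(MAKE) -C $(KERNEL_SRC) M=$(shell pwd) modules_install

One caveat: newer kbuild versions complain when the same object is linked into multiple modules. If that bites, build the shared code as a third module that exports its symbols, or compile it once per module under different object names.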

Differences between executable files generated by Dymola and OpenModelica

I am considering using the executable file generated by either Dymola (dymosim.exe) or OpenModelica (model_name.exe) to run parametric simulations on the same model.
I was wondering: is there any difference between the two .exe files and their related input files (dsin.txt for Dymola, model_name_init.xml for OpenModelica)?
Regarding file sizes, I can see that the Dymola files are smaller. But I was also wondering about speed of execution and flexibility of the input files for scripting.
Lastly, since Dymola is a commercial software, is the dymosim.exe file publicly shareable?
I will write this for OpenModelica; the Dymola people can add their own.
I would suggest using FMUs instead of executables, together with some (co-)simulation framework like OMSimulator (via Python scripting) or some other one (PyFMI, etc.). See an example here:
https://www.openmodelica.org/doc/OMSimulator/master/html/OMSimulatorPython.html#example-pi
Note that if you have resources such as tables, these will be put inside the FMU if you use Modelica URIs (modelica://LibraryName/Resource/blah). For the generated executables, however, you would need to ship them with the exe, and they would need to be in a specific directory on the other machine. You would also need to ship the dependent DLLs of the executables; for FMUs this is mostly not needed (unless you call external DLLs from your model), as they are statically compiled.
Simulation speed depends on the model; sometimes one or the other is faster.
To see which libraries are supported by OpenModelica, check the library coverage:
https://libraries.openmodelica.org/branches/overview-combined.html
If you still want to use executables, here is a list of command line parameters for them: https://www.openmodelica.org/doc/OpenModelicaUsersGuide/latest/simulationflags.html
How to do parameter sweeps via executables:
https://openmodelica.org/doc/OpenModelicaUsersGuide/latest/scripting_api.html#simulation-parameter-sweep
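As a rough illustration, a sweep can also be scripted directly around the generated executable using the -override and -r simulation flags from the page linked above (the model and parameter names here are hypothetical):

# run the OpenModelica-generated executable once per parameter value,
# writing a separate result file for each run
for k in 0.1 0.2 0.3; do
  ./model_name -override=myParam=$k -r=result_$k.mat
done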
For Dymola:
If you have the appropriate binary export license you can generate a dymosim.exe that can be distributed.
Parameter-sweep can be run inside Dymola (the scripts are automatically generated), or from Python etc.
However, running a parameter sweep in that way uses not only dsin.txt but also some additional files. There are two reasons:
Reduced overhead of starting/stopping dymosim.exe, especially for small models.
Automatic parallelization.
That part of dymosim is currently not well documented in the manual, but you can run:
dymosim -M  (by default sweeps based on two CSV files, multIn.csv and multOutHeader.csv, generating a third, multOut.csv)
dymosim -M -1 mIn.csv -2 mOutH.csv -3 mOut.csv  (if you want different file names)
dymosim -M -n 45  (to generate normal trajectory files: dsres45.mat, dsres46.mat, ...)
dymosim -h  (for help)
dymosim -s  (normal simulation)
And if you are really bold you can pipe to/from dymosim.exe for parameter sweeps.
Another possibility is to use FMUs instead.

Yocto deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I shall deploy?
Thanks and regards,
Martin
You can have two different virtual recipes, each with its own .so file. This then warrants a selection in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or distro configuration file. This is probably preferred if you are considering having release and debug distros.
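A minimal sketch of this first option, with hypothetical recipe and tool names:

# mytool-release_1.0.bb
PROVIDES = "virtual/mytool"
# ... do_install puts the release .so in place ...

# mytool-debug_1.0.bb
PROVIDES = "virtual/mytool"
# ... do_install puts the debug .so in place ...

# in your machine or distro .conf
PREFERRED_PROVIDER_virtual/mytool = "mytool-debug"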
A second option is to install the libraries in two different paths, in two different packages (use FILES_my-package for that), and make them RCONFLICTS_my-package with each other to be sure they can't both be in the rootfs. After that, you could write a pkg_postinst_my-package() function specific to each package that moves the library from the "different" path to the intended one. This runs both at build time when creating the rootfs and at runtime on first boot, so you need to make sure to exclude one or the other (usually done by checking whether ${D} exists, which it does at build time but not at runtime).
c.f.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
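A sketch of that second option, with hypothetical package and file names:

# both variants live in the same recipe, staged in distinct holding paths
PACKAGES =+ "mytool-release mytool-debug"
FILES_mytool-release = "/opt/mytool/release/libmytool.so"
FILES_mytool-debug = "/opt/mytool/debug/libmytool.so"
RCONFLICTS_mytool-release = "mytool-debug"
RCONFLICTS_mytool-debug = "mytool-release"

pkg_postinst_mytool-debug() {
    # $D is set when the rootfs is assembled at build time, empty on first boot;
    # here the move is done only at rootfs-creation time
    if [ -n "$D" ]; then
        mv $D/opt/mytool/debug/libmytool.so $D/usr/lib/libmytool.so
    fi
}

(and the mirror image for mytool-release)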
If you can manage to have both libraries installed in your rootfs and select the one you want with the LD_LIBRARY_PATH environment variable, a simple recipe with two packages, each library in a different location, will be sufficient.
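In that case, selection on the target could be as simple as (hypothetical path and consumer):

# pick the debug build for this session
export LD_LIBRARY_PATH=/usr/lib/mytool-debug:$LD_LIBRARY_PATH
my-application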

Building modules with linux kernel for custom flavor

I followed the instructions given in the link: http://blog.avirtualhome.com/how-to-compile-a-new-ubuntu-11-04-natty-kernel/ for building a custom kernel and booting it. Everything works fine, except that when building it, I used the option skipmodule=true (as given in this link), so I guess the modules are not built for this kernel. So I have two questions:
How do I build only the modules for my flavor, now that I have the rest of the kernel built? 'make modules' will build it for generic flavor only, if I'm not wrong.
Also, does it require me to rebuild the entire kernel source ('fakeroot debian/rules binary-i5', where i5 is my custom flavor) each time I make a change to one of my modules?
Thanks.
1) To build a Linux kernel module for a specific kernel from the module source directory, do:
make -C {path-to-kernel-source} M=`pwd` modules
The -C option points make at the kernel source tree, where it finds the kernel's top-level Makefile. The M=`pwd` option points it back to the module source directory, where it builds the 'modules' target.
2) No, it's not necessary to rebuild the entire kernel. Having either the kernel source tree or the kernel headers installed suffices.
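For the common case of building against the headers of the running kernel, a minimal out-of-tree Makefile wrapping that invocation looks like this (the module name is hypothetical):

# Makefile for an out-of-tree kernel module
obj-m := mymodule.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

After changing module code you only rerun make here, not the full kernel build, which answers question 2.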

Automating Solaris custom software deployment and configuration for multiple nodes

Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example:
Checking that certain users exist and are associated with one or more user groups. If not, then create them and their group associations.
Checking that target application folders exist and if not, then create them with pre-configured path values defined when the package was assembled.
Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
Checking that a set of environment variables is defined in /etc/profile, pointing to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.
Obviously, doing this for many boxes (the goal in question) by hand will certainly be slow and error prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
Traditional shell scripts. I have only troubleshot these before and don't really have much experience writing them. These would be my last resort.
Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting.
It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.
Most of those steps sound like things handled by a packaging system installing your package. On Solaris 10, that would be the SVR4 packaging system included with the OS.
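As a rough sketch of what that looks like (all names hypothetical): the package is described by a pkginfo and a prototype file, built with pkgmk, and installed with pkgadd; prerequisite checks such as creating users and groups go into a preinstall script shipped inside the package.

# pkginfo
PKG=MYdaemon
NAME=My daemon
VERSION=1.0
ARCH=sparc
CATEGORY=application
BASEDIR=/opt

# prototype ('i' lines are control files, 'd'/'f' are directories and files)
i pkginfo
i preinstall
d none mydaemon 0755 daemonuser daemongrp
f none mydaemon/bin/mydaemond 0755 daemonuser daemongrp

# build the package, then install it on a target host
pkgmk -o -d /var/spool/pkg
pkgadd -d /var/spool/pkg MYdaemon

The preinstall script is plain shell, so the user, group, and directory checks from the list above map onto groupadd/useradd/chown calls there.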