When to modify Poky vs creating a new distro - yocto

Even though I can technically add .bbappend files to a custom layer that change the behavior of the .bb files in meta-poky, I'm not certain whether this is good practice.
For example, I want to use networkd to configure eth0 to use DHCP, and to bridge all remaining network interfaces (eth* and wlan*) together with a static IP. This is really easy to do by adding a systemd_%.bbappend that installs additional networkd unit files. However, I'd be modifying Poky for all boards using systemd. I could make the modification machine-specific, but then I'd have to do it for each new custom board.

Always create your own distro. Poky can and will change between releases because it is primarily designed to be a testbed for QA.
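For illustration, a minimal custom distro can simply pull in Poky's settings and layer its own policy on top; the layer and distro names below are made-up examples, not anything from the question:
# Sketch: create a custom distro configuration that builds on Poky's defaults
mkdir -p meta-mylayer/conf/distro
cat > meta-mylayer/conf/distro/mydistro.conf << 'EOF'
require conf/distro/poky.conf
DISTRO = "mydistro"
DISTRO_NAME = "My Distro"
# put your own policy tweaks here instead of editing meta-poky
EOF
# then select it in conf/local.conf:
#   DISTRO = "mydistro"
This keeps your policy in your own layer, so upgrading Poky later only means re-checking the settings you deliberately override.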

Also do try to avoid bbappends if at all possible. They make it more difficult to create a picture of the whole recipe in your head, particularly if you start by looking at the recipe itself, and don't know that there are also bbappends in other layers that modify it.
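As a sketch of the alternative for the networkd example in the question: instead of a systemd_%.bbappend, a small self-contained recipe in your own layer can install the .network files, and your images just pull it in. The recipe name, file names, and license details below are assumptions for illustration, and details such as ${WORKDIR} usage can differ between Yocto releases:
# Sketch: meta-mylayer/recipes-core/networkd-conf/networkd-conf.bb,
# with the .network files placed next to it in files/
mkdir -p meta-mylayer/recipes-core/networkd-conf/files
cat > meta-mylayer/recipes-core/networkd-conf/networkd-conf.bb << 'EOF'
SUMMARY = "Custom systemd-networkd configuration"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://eth0-dhcp.network \
           file://br0-static.network"

do_install() {
    install -d ${D}${sysconfdir}/systemd/network
    install -m 0644 ${WORKDIR}/eth0-dhcp.network ${D}${sysconfdir}/systemd/network/
    install -m 0644 ${WORKDIR}/br0-static.network ${D}${sysconfdir}/systemd/network/
}
EOF
# then add "networkd-conf" to IMAGE_INSTALL in your image recipe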

Related

How to keep the Yocto workspace minimal when building for multiple boards?

I would like to know what the best practice is to keep the Yocto workspace as small as possible. Currently, I am creating a separate build folder for every board. I wonder if there is a better way to reuse as many packages as possible. Can you just build in the same folder?
If you base everything on the same core Yocto system, you can use the "distro" feature.
What I was attempting to convey to you is that you should move your "distro" parameters into the "machine" config, then just build for multiple machines. If you want to have multiple image types, just create multiple image recipes, build them in the same directory, and save disk space.
You can use the whole build tree for multiple machines. Changing DISTRO has lots of extra effects on packages; in the past it wasn't possible at all without wiping tmp, and it is still not a nice thing to do, as you have discovered for yourself now.
You'd invoke your build like this:
for machine in apple-pie orange-pie banana-pie
do
    MACHINE=$machine bitbake red-image blue-image green-image
done
You can build multiple images for the same machine in parallel, and you can build multiple machines in the same environment (not in parallel yet, unfortunately, but you probably don't have so many machines that this would really help).
Indeed, building everything in one folder is the preferred way for Yocto (although it's not mentioned explicitly in the getting-started documentation). Yocto will take care of reusing as much as it can, which avoids duplicate files and duplicate build work.
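One concrete way that reuse shows up is through the shared download and shared-state caches; a sketch of settings you might add to conf/local.conf (the paths are just examples):
# Sketch: point all builds at shared caches so sources and build
# artifacts are fetched and built only once
cat >> conf/local.conf << 'EOF'
DL_DIR = "/srv/yocto/downloads"
SSTATE_DIR = "/srv/yocto/sstate-cache"
EOF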

Yocto - development build vs production build

I'd like to have a traditional way to build a development image and a production image, both pulling from the same recipes but differing only in debug tools/settings.
I've read that conditional includes in builds are not really how Yocto/BitBake works, but thought perhaps the easiest way is to create a separate MACHINE folder, and perhaps have the debug build point to an additional layer with the debug stuff.
You can write two separate image recipes, and have them include what is needed (e.g. additional debug-oriented recipes).
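A minimal sketch of that idea (recipe and package names are made up; the exact image features available depend on your release):
# Sketch: a production image plus a development image derived from it
cat > meta-mylayer/recipes-core/images/my-image.bb << 'EOF'
SUMMARY = "Production image"
LICENSE = "MIT"
inherit core-image
IMAGE_INSTALL += "my-app"
EOF

cat > meta-mylayer/recipes-core/images/my-image-dev.bb << 'EOF'
require my-image.bb
SUMMARY = "Development image with extra debug tools and settings"
IMAGE_FEATURES += "debug-tweaks tools-debug ssh-server-openssh"
EOF
# build either one: bitbake my-image   /   bitbake my-image-dev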

Remove obviously unwanted cookbooks in chef

I have started using Chef, but a lot of the time I am going to use it on a CentOS VPS. This means that community cookbooks such as windows and iis are of no use to me now or in the foreseeable future. I want to be able to customize cookbooks that depend on these cookbooks and their recipes. Is that possible? And if so, can you outline the process?
I understand that these cookbooks are meant to be reusable, but I wish to customize my chef-repo for a very specific need.
Hope there is an easy solution out there! Thanks
Of course you can customize it. Just delete the unneeded ones, and write new ones when the existing ones don't fit your needs.
Note:
What you must have in your cookbooks directory are:
the cookbooks you call in your roles (or in your Vagrantfile, if you use Vagrant), and
the cookbooks that the previously mentioned ones depend on.
Now, how to find out what the dependencies are (see the commands sketched below):
the metadata.rb file within each cookbook contains a dependencies section, and
recipes are included by invocation via "include_recipe". Sometimes the metadata are not up to date, so you'll have to track dependencies this way.
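A rough sketch of how you might track both down from the shell (the paths assume a standard chef-repo layout):
# list the declared dependencies from each cookbook's metadata.rb
grep -R "^depends" cookbooks/*/metadata.rb
# find recipes pulled in directly by other recipes
grep -R "include_recipe" cookbooks/*/recipes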
If an Opscode cookbook doesn't suit your needs (even after changing the attributes), search GitHub for other sources; there's an entire ecosystem around this. If none of this turns out to work, write your own cookbooks. Use the "knife" tool to create an empty cookbook (template). The command would be:
knife cookbook create COOKBOOK_NAME [--cookbook-path PATH_TO_YOUR_COOKBOOK_DIRECTORY]
Besides, rely on the Opscode manuals. There's plenty of stuff there ;)

Version control of deliverables

We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up-to-date environment to build and test in. Due to the nature of the project, updates must be done often and on demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get-latest before testing and check-in updated DLL after testing.
It works fine, but a version control client has a lot of features which don't make sense for us and people occasionally get confused.
Are there any tools better suited for the task? Or may be a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project - more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those modules/plugins work nicely with recent versions of each other and of the core. A centralised build, as was suggested, was considered initially, but it's not an option.
I'd probably take a look at rsync.
Just create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart about deciding which parts of files need to be transferred, so it'll be very fast even when large files are involved.
What rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.
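A sketch of the rsync call such a wrapper might contain (the host, paths, and options are assumptions, not from the question):
# pull the latest deliverables from a central share; rsync only
# transfers the parts of files that actually changed
rsync -avz --delete buildhost:/srv/deliverables/ ./bin/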
Another option is unison
You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.
Obviously that doesn't help with the keeping your local files in sync, but I think you have bigger problems with your process.
Building the project should be a centralized process in order to allow for better control; otherwise your solution will become chaos in the long run. Anyway, here is what I'd do.
Create the usual repositories for source files, resources, documentation, etc. for each project.
Create a repository for resources. It will hold the latest binary versions for each project as well as any required resources, files, etc. Keep a good folder structure for each project so developers can "reference" the files directly.
Create a repository for final builds which will hold the actual stable releases. This will get the stable files, produced in an automatic way (if possible) from the checked-in sources. This will hold the real product, the real version for integration testing and so on.
While far from being perfect, you'll be able to define well-established protocols: check in your latest DLL here, generate the "real" version from the latest sources there.
What about embedding a 'what' string in the executables and libraries? Then you can synchronise the desired list of versions with a manifest.
We tend to use CVS id strings as a part of the what string.
const char cvsid[] = "#(#)INETOPS_filter_ip_$Revision: 1.9 $";
Entering the command
what filter_ip | grep INETOPS
returns
INETOPS_filter_ip_$Revision: 1.9 $
We do this for all deliverables so we can see whether the versions in a bundle of libraries and executables match the list in an associated manifest.
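A sketch of what such a manifest check could look like in a shell loop (the manifest format and names are made up for illustration):
# manifest.txt lines look like: filter_ip 1.9
while read file version
do
    actual=$(what "$file" | grep INETOPS | sed -n 's/.*Revision: \([0-9.]*\).*/\1/p')
    [ "$actual" = "$version" ] || echo "MISMATCH: $file is $actual, manifest says $version"
done < manifest.txt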
HTH.
cheers,
Rob
Subversion handles binary files really well, is pretty fast, and scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.
You could set up a folder that's checked out from Subversion with all your binary files (that all developers can commit to and update from), then just type "svn update" at the command line, or use TortoiseSVN: right-click on the folder, click "SVN Update", and it'll update all the files and tell you what's changed.
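A sketch of the day-to-day commands (the repository URL and file name are hypothetical):
# initial setup: check out the shared binaries folder once
svn checkout https://svn.example.com/deliverables/trunk deliverables
cd deliverables
# before testing: pull the latest binaries
svn update
# after testing: check in the DLL you rebuilt
svn commit -m "Updated filter.dll after testing" filter.dll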

Do you version "derived" files?

Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that, after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus preventing the need to version these files.
One scenario I can imagine where this could be useful is 'tagging' specific releases of a product for use in a production environment (or any non-development environment), where the tools required for generating the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for downloading by the users of our products.
I am using TortoiseSVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. While in theory it doesn't make a lot of sense to put these under source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also, in case of disaster, the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that creates a tarball including the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
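A sketch of such a release step (the regeneration command is a placeholder for whatever actually builds your derived files; the file names follow the pstool example above):
# regenerate the derived files from the master file, then bundle
# everything users need into a single archive
latex pstool.tex        # placeholder: assumed to extract/build the package files
tar czf pstool.tar.gz pstool.tex pstool.sty pstool.pdf README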
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more of a linear audit trail than branching-and-tagging source control.