Apply OpenEmbedded patch for Buildroot

How can I modify a patch created for OpenEmbedded so that it works with Buildroot? What is the difference between them?

What does this patch do? If it's a patch that modifies the source code of a given software component then it should be easy to adapt. If on the other hand it's a patch that adds a new package to OpenEmbedded, or modifies some OpenEmbedded recipe, then it cannot be used for Buildroot: OpenEmbedded and Buildroot are two quite different build systems.
That said, it's impossible to answer your question precisely if you don't provide more details. Could you point to the specific patch you're referring to?


What are the different functions available in Yocto, like do_compile_append?

I am looking for the different functions available in Yocto, like do_compile_append(). Also, when is do_compile_append() invoked by Yocto? I am looking for in-depth manual documentation on such functions. Are there any examples available?
Yocto has extensive documentation. Tasks and dependencies in general as well as the standard tasks like do_compile() are defined in the Reference Manual. The function override style ("_append" and "_prepend") is explained in the Bitbake Manual. What exactly happens during a do_compile depends on the specific recipe of course.
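For a concrete illustration of the override style (the recipe name and paths here are invented, not from any real layer), a recipe or .bbappend might extend the compile step like this:
# example-app_%.bbappend -- hypothetical append file
# _append adds this shell fragment after the existing do_compile body
do_compile_append() {
    # runs after the main compile step; oe_runmake is the standard make wrapper
    oe_runmake -C ${S}/tools
}
# _prepend would run before the original task body instead
do_compile_prepend() {
    echo "about to compile ${PN} ${PV}"
}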
The Mega Manual contains all of the Yocto documentation in a single HTML file -- very useful for quick searches, but it's already far longer than most novels, so it's not suitable for reading front to back.

How to make "prereqs" of CPAN::Meta::Spec require a distribution instead of a package?

I'm researching how to package some of my Perl apps and better manage their dependencies, to make distribution easier for me and my customers; that distribution most likely won't include uploading to CPAN at all. Instead, I would provide custom repos if necessary or, more likely, access to SCMs like Subversion.
CPAN::Meta::Spec seems to provide what I need to describe my apps, their dependencies and even where to get them from, but what I'm wondering is about the level of detail of pre-requisites. The spec contains the following sentence:
The set of relations must be specified as a Map of package names to version ranges.
Requiring packages seems a little too low-level for my needs; I would prefer requiring distributions instead. That's pretty much the level at which (from my understanding) tools like Maven and Gradle work, e.g. Apache Commons Lang vs. Apache Commons IO etc., instead of individual classes like org.apache.commons.lang3.AnnotationUtils or org.apache.commons.io.ByteOrderMark. OTOH, the example in the docs contains the following lines:
requires => {
    'perl'       => '5.006',
    'File::Spec' => '0.86',
    'JSON'       => '2.16',
},
The line containing perl doesn't look like a package to me, and I didn't find any package perl or perl.pm anywhere on my system. Seems to me like that is handled differently from the other entries in the example.
I have a system-wide folder containing e.g. some utility packages, which seems comparable to some abstract perl to me. That folder should get defined as one distribution, maintain a version number for all of the packages in that folder, and therefore should allow other apps to require that whole thing. If I understand the docs correctly, I would need to create not only the META.yml in the folder, but additionally some e.g. sysutils.pm containing package sysutils; and defining some version.
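For illustration, that stub would be nothing more than something like this (the name sysutils is just my example from above):
# sysutils.pm -- placeholder module whose only job is to carry a version
package sysutils;
our $VERSION = '1.00';
1;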
Is there some way to avoid creating that file and really require the distribution itself only?
The META.yml already contains a name and version on its own, so it looks like some abstract thing one could require in theory. I don't see the need to add an additional .pm file representing the distribution itself only to allow require to work. It wouldn't contain any business logic in my case.
Thanks!
That's really not what you want to do. You want to pre-req what you actually require. So, for example, if you need File::Spec, that's what you need, regardless of whether it comes from perl core or from a separate CPAN distribution.
I've seen cases where certain modules have moved from CPAN to core, or vice versa. By requiring the module directly, you don't need to ship new releases of your dependent distributions simply because someone you depend on changed their method of distribution.
I've also seen cases where certain modules are split off from their original distributions when it was determined they were valuable as standalone modules. Depending on the module means that you no longer drag in a bunch of other modules for a simple dependency.
What you're more or less looking for is akin to the Task::* modules. No real logic in most of them, just a list of further dependencies.
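As a rough sketch of that pattern, assuming an ExtUtils::MakeMaker-based distribution (the Task::MyUtils name and the module list are made up): the Task-style module itself is nearly empty, and the dependency list lives in the build metadata:
# lib/Task/MyUtils.pm -- no business logic, only a version to index
package Task::MyUtils;
our $VERSION = '1.00';
1;

# Makefile.PL -- the actual "list of further dependencies"
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME         => 'Task::MyUtils',
    VERSION_FROM => 'lib/Task/MyUtils.pm',
    PREREQ_PM    => {
        'File::Spec' => '0.86',
        'JSON'       => '2.16',
    },
);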
The Perl dependency system works entirely on package names, on multiple levels. When a CPAN distribution is uploaded, each package within is indexed by PAUSE, which also checks if the uploader has permissions for that package and that the package has a newer version than the currently indexed package. None of these checks care about the distribution as a whole (though the indexer does do other checks at that level).
Then, when a CPAN client sees a dependency, or you tell it to install something, it checks the index for that package name, which tells it what distribution release to install. If it depends on a certain version, that is checked against the $VERSION declared in that package if you have it installed; whereas once a distribution is installed, its "version" is no longer tracked. The distribution level is almost entirely meaningless except that it is what is ultimately downloaded and installed to satisfy these dependencies. This is important, because modules can and do move between distributions, maintaining their version increments, and the package index will always tell you which distribution to get the version you need.
As you noticed, the perl dependency is weird. It's a special case that has been there forever: as a convention, to declare what version of Perl you require, you declare a runtime requirement of perl. It is not an indexed module, and every CPAN client and other consumer of CPAN metadata special-cases this to either ignore it or treat it as a minimum Perl version, rather than something that can be installed. There's no way to extend this to work for distributions in general, and it would be a bad idea to try.
As an additional note, the CPAN meta spec is a specification for the file named META.json included in CPAN distributions (META.yml is the legacy version), but this file is automatically generated by your authoring tool. It should never be manually created, though you may have your authoring tool manually add certain keys (in which case reading the spec is important), including prereqs. See neilb's blog post for how to specify dependencies for various authoring tools, which will then transpose these into the generated META file, and also how to use cpanfiles to specify dependencies in general.
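For example, a cpanfile is just a small Perl DSL that lists package-level requirements (the runtime modules and versions are the ones from the spec example above; the test requirement is an invented illustration):
# cpanfile -- read by tools such as cpanm and Carton
requires 'perl', '5.006';            # special-cased minimum Perl version
requires 'File::Spec', '0.86';
requires 'JSON', '2.16';

on 'test' => sub {
    requires 'Test::More', '0.96';
};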

Do ELF object files store file system paths for dependent libraries?

When we link executables with ld, we give a list of libraries that the executable depends upon. Is this the only source of the location information for these libraries, or is some information about the preferred version of dependent libraries stored as metadata in the object files?
The specific issue is this: if I link against two libraries, lA and lB, which both depend upon a third library lC, and I place references to these libraries on the link line, it appears that C++ class methods in lA call into a different version of lC than class methods in lB. How is this possible? I know this from looking at a backtrace in gdb.
They might. DT_RPATH is used when resolving the libraries themselves at run time. They also include the full object name (the DT_SONAME), which might include a version number, and if the library uses symbol versioning correctly, then the symbols don't actually collide with each other.
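You can check what each object actually records with readelf; something along these lines (library and executable names are placeholders) usually shows which copy of lC each one will pull in:
readelf -d libA.so | grep -E 'NEEDED|SONAME|RPATH|RUNPATH'
readelf -d libB.so | grep -E 'NEEDED|SONAME|RPATH|RUNPATH'
ldd ./your_executable    # shows the file each NEEDED entry actually resolves to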
I can send you to my blog for a couple of insights into DT_RPATH and DT_SONAME:
https://blog.flameeyes.eu/2010/06/the-why-and-how-of-rpath/
https://blog.flameeyes.eu/2009/10/a-shared-library-by-any-other-name/
https://blog.flameeyes.eu/2010/10/linkers-and-names/

Find Simulink Requirements and get their contents

I find the requirements by searching for subsystems, as there seems to be no special block for requirements.
find_system(gcs,'LookUnderMasks','none','FollowLinks','off','BlockType','SubSystem','LinkStatus','none')
I get all the subsystems, including:
'test_simulinkmodel/SLVnV Internal Requirement Sub Block Name 1'
Is there some other way than to look for this (default?) string?
Also, when I know the path, is there some way to get the contents (titles, descriptions)?
Which release are you using? In the latest release (R2013a), there is the System Requirements block. You can also generate (and customize) a requirements report, which should also work in earlier releases (I remember using it in R2011a for example).
In the R2017b release, MathWorks released a new product called Simulink Requirements that lets you author and manage requirements within Simulink. You can link requirements to design objects in Simulink as well as to test cases and their results.
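If you are stuck on an older release with the subsystem-based representation, a rough sketch building on the find_system call from the question might look like the following; note that it is only an assumption that the requirement title and text end up in the blocks' Name and Description parameters, so verify that on your model:
% find the requirement subsystems by their (default?) name pattern
blocks = find_system(gcs, 'LookUnderMasks','none', 'FollowLinks','off', ...
    'RegExp','on', 'BlockType','SubSystem', 'Name','SLVnV Internal Requirement.*');
for k = 1:numel(blocks)
    name = get_param(blocks{k}, 'Name');         % block title
    desc = get_param(blocks{k}, 'Description');  % assumed to hold the requirement text
    fprintf('%s\n%s\n\n', name, desc);
end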

Is there a 3-way merge tool that “understands” common refactoring?

When a simple refactoring like “rename field” has been done on one branch, it can be very hard to merge the changes into the other branches. (Extract method is much harder, as the merge tools don’t seem to match the unchanged blocks well.)
Now in my dreams, I am thinking of a tool that can record (or work out) what well defined refactoring operations have been done on one branch and then “replay” them on the other branch, rather than trying to merge every line the refactoring has affected.
See also "Is there an intelligent 3rd merge tool that understands VB.NET" for the other half of my pain!
Also, has anyone tried something like MolhadoRef (see the blog article about MolhadoRef and refactoring-aware SCM)? This is, in theory, refactoring-aware source control.
You could use Coccinelle to do the same kind of refactoring operations on different branches. It will not record or figure out what is being done by itself; you have to explicitly tell it what to do, but other than that it will more or less effortlessly do the same refactoring on as many branches as you point it to.
This tool has been used in the Linux kernel for updating API usage, etc.
To quote from its web page:
"Coccinelle is a program matching and
transformation engine which provides
the language SmPL (Semantic Patch
Language) for specifying desired
matches and transformations in C code."
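For instance, a "rename field" refactoring can be written as a small semantic patch (the struct and field names here are invented) and then applied to each branch, roughly with spatch --sp-file rename_owner.cocci --in-place --dir src/ (check the exact flag spelling for your Coccinelle version):
// rename_owner.cocci -- rename the field "owner" to "owner_id" on struct account
@@
struct account *a;
@@
- a->owner
+ a->owner_id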
Darcs supports a 'token replace' operation in a commit, which replaces all instances of one token with another, and merges as you'd want it to.
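A rough sketch of that workflow (identifier and file names invented):
darcs replace oldFieldName newFieldName src/Account.cs
darcs record -m "Rename oldFieldName to newFieldName"
The rename is then stored as a token-replace patch, so it merges into other branches as a rename rather than as a pile of conflicting line edits.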
Araxis Merge doesn't understand common refactoring, but it is the only three-way merge tool that I've used. It is available for both the Mac and Windows, and it supports an Automation API, so I would imagine that you could do what you want with that if you were so inclined. For the record, I have no connection with Araxis other than that I've used their product.
Plastic SCM's (www.plasticscm.com) 3-way merge tool implements Xmerge, which is the only one able to assist you in merging code that has been moved.
There are now some better merge tools (for example SemanticMerge) that are based on language parsing, designed to deal with code that has been moved and modified. JetBrains (the creator of ReSharper) has just posted a blog post on this.
There has been lots of research on this over the years; at last, some products are coming to market.
On Linux you can use Meld, or on Windows WinMerge.
In any case, both tools only "understand" lines of text. Refactoring requires a way of understanding the code, which is beyond any merging/comparing tool that I know of.