Let's say we have recipes 'A' and 'B', each of which installs some binaries on the target image. At runtime, the binary produced by 'A' depends on the presence of the binary produced by 'B'. I can make both binaries exist if I just do
IMAGE_INSTALL_append = " A B"
and this works fine. But what I want is for recipe 'A' to pull in recipe 'B' in any case, so that the user doesn't need to know that 'A' needs 'B' on the image. For example, they would only do
IMAGE_INSTALL_append = " A"
What should I do in recipe 'A' to achieve this?
If B is a library, adding DEPENDS += "B" is enough.
If B is an application, you should instead add RDEPENDS_${PN} += "B" in the A recipe, in order to add a runtime dependency.
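A minimal sketch of what recipe A would carry (everything except the dependency lines is hypothetical filler):

```
# A_1.0.bb (sketch; fetch/build details omitted)
SUMMARY = "Tool A"

# build-time dependency, only needed if A links against a library from B:
# DEPENDS += "B"

# runtime dependency: installing package A pulls package B into the image,
# so IMAGE_INSTALL_append = " A" is enough on its own
RDEPENDS_${PN} += "B"
```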
I am developing a multiplatform project, so it is a mix of different boards with the appropriate meta layers. My image meta layer contains some bbappend recipes that are board specific, although I would like to stick to a single image-layer repository rather than having an image-layer repository for each board.
So is there any way to completely hide/ignore/disable specific bbappend files?
Example:
I have layers for var-som-* boards. For those boards I have recipes-kernel/linux-variscite_%.bbappend, so building for var-som-* boards is fine, but a problem happens when I build, for example, for a Raspberry Pi. Having the Variscite layer (as well as the whole Freescale set) adds a lot of things to the image that I don't want, so I remove the Variscite and Freescale layers, and that produces the error: No recipes available for: recipes-kernel/linux-variscite_%.bbappend.
Luckily this question has already been answered:
Yocto Dunfell error 'No recipes available for' with multiple machines in single custom meta layer
So for those who faced the same problem, here is a quick summary:
Within your image meta layer, create a folder structure that does NOT match the BBFILES directive.
Move your .bbappend recipes there.
Include recipes on a per-meta-layer basis using the BBFILES_DYNAMIC directive.
Example:
# that's the default and commonly used way to import recipes from your layer
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"

# "hide" recipes "deeper" within the folder structure,
# so they won't be included by ${LAYERDIR}/recipes-*/*/*.bbappend
BBFILES_DYNAMIC += "\
    meta-atmel:${LAYERDIR}/dynamic-layers/meta-atmel/recipes-*/*/*.bbappend \
    meta-atmel:${LAYERDIR}/dynamic-layers/meta-atmel/recipes-*/*/*/*.bbappend \
"
I want the tar.bz2 image to be included in the wic image, which is an installer wic image.
I have:
IMAGE_FSTYPES += "tar.bz2"
do_image_wic[depends] += "${IMAGE_BASENAME}:do_image_tar"
IMAGE_BOOT_FILES += "${IMAGE_BASENAME}-${MACHINE}.tar.bz2;upgrade.bz2"
so the tar.bz2 is made first, but it is not deployed at the point the wic image is made; it is in:
build_output/work/device-type-linux/yocto-image-release/1.0-r0/deploy-yocto-image-release-image-complete/yocto-image-release-device-type-20190611214913.rootfs.tar.bz2
It won't appear in the deploy dir until after yocto-image-release:do_deploy, which naturally occurs after the wic is built (which now fails).
Is there a safe way to access that for the wic imager?
I'm guessing work-shared won't be any good https://www.yoctoproject.org/docs/latest/ref-manual/ref-manual.html#structure-build-work-shared
Is the better way to have a new installer.bb which depends on the yocto-image-release.bb:do_deploy so it can find the pieces and then make its own wic?
One solution seems to involve BBCLASSEXTEND, so that I can build bitbake yocto-image-release and bitbake yocto-image-installer by amending the recipe (or a parent class) to include:
BBCLASSEXTEND += "installer"
DEPENDS_installer += "${BPN}"
and in installer.bbclass:
CLASSOVERRIDE = "installer"
and then I can override values with the _installer suffix, although there will likely be a lot of work neutralising most of the configuration and methods of the native recipe, because (for now) all I want to build is a wic with the system's own kernel.
No doubt it will later have its own kernel configuration and initramfs anyway, as the installer specialises.
This seems nice, as there are a variety of images (-dev, -debug, etc.), all of which may want an installer. But I still wonder if -installer couples the two too tightly.
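The override pattern proposed above could be sketched roughly as follows (the _installer values are hypothetical placeholders, and this assumes an installer.bbclass that sets CLASSOVERRIDE as described; untested):

```
# in the image recipe or a parent class (sketch)
BBCLASSEXTEND += "installer"

# values picked up only by the installer variant via the CLASSOVERRIDE suffix
IMAGE_FSTYPES_installer = "wic"
IMAGE_INSTALL_installer += "installer-scripts"
```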
We are currently using a stereotype with an attached shape script to show that files are linked to an element.
However, going this route means that our users cannot use another stereotype, or it will be overwritten (even if multiple stereotypes can be applied, only one will be shown and only one shape script will be applied).
I tried using the "A" icon shown when a linked document has been created for an element, by modifying the style property of the element, but setting MDoc=1 without a linked document will not show the icon.
What would be an effective way of showing that there are files linked to an element without using stereotypes (if any)?
You are out of luck here. The Link to Element Feature on Notes works for many things, but not for related files.
What you could do is link <<files>>-stereotyped Notes to the elements and run a batch script that looks into the Related/Files list and fills them into the Notes. Basically that would be something like:
for dia in all_diagrams:                      # pseudocode: iterate all diagrams in the repository
    for diaObj in dia.DiagramObjects:
        obj = rep.GetElementByID(diaObj.ElementID)
        if obj.Type == "Note" and obj.Stereotype == "files":
            con = obj.Connectors.GetAt(0)     # assume there's only one connector
            ident = con.ClientID
            if ident == obj.ElementID:
                ident = con.SupplierID        # the note is the client, so take the other end
            fObj = rep.GetElementByID(ident)  # the element connected to the note
            # parse fObj's files and write them as a string to obj's Notes attribute
I have the following snippet, to copy a file as-is to the build dir:
for m in std_mibs:
    print("Copying", m)
    bld(name = 'cpstdmib',
        rule = 'cp -f ${SRC} ${TGT}',
        #source = m + '.mib',
        source = bld.path.make_node(m + '.mib'),  # <-- section 5.3.3 of the waf book
        target = bld.path.get_bld().make_node(m + '.mib')
    )
I see that this rule, though hit (from the print), doesn't seem to perform the copy!
I also changed the source to use make_node as shown, following an example in section 5.3.3 of the waf book, still no luck! Am I missing something obvious here?
Also, I have some rules after this which rely on the copied files, so I tried adding an intervening
bld.add_group()
hoping that the sequencing will work if this copy succeeds.
If you run the rule once, it will not be run again until source is updated. This is true even if the target is deleted, for instance (which is probably how you were testing.)
If you want to recopy if the target is deleted, you will need always=True, or you'll need to check for existence and set target.sig = None.
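A minimal sketch of that first workaround, reusing the rule from the question (not standalone: bld and m come from the enclosing build function):

```python
# wscript fragment (sketch): re-run the copy on every build,
# even if the target already exists
bld(rule   = 'cp -f ${SRC} ${TGT}',
    source = bld.path.make_node(m + '.mib'),
    target = bld.path.get_bld().make_node(m + '.mib'),
    always = True)
```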
Two alternatives:
features="subst" with is_copy=True:
bld(features='subst', source='wscript', target='wscript', is_copy=True)
waflib.extras.buildcopy like this:
from waflib.extras import buildcopy
#...
def build(bld):
    bld(features='buildcopy', buildcopy_source=['file'])
cp is not platform-independent.
A task_gen object is created, which will later become a Task executed by the build; don't expect an immediate effect.
Have a look into your out dir: there will be an out/${TGT} (not exactly, but the ${TGT} path relative to your top directory).
This is totally expected behaviour, since you do not want to modify your source tree when building.
Some IDEs like PyCharm offer the ability to mark parts of source code with # TODO tags, with the further ability to locate all the tags later on.
Is there any way to convert them into "Issues" after a commit is made to Bitbucket or GitHub?
I find it might be very useful to create TODOs on the fly while writing code, so that other contributors can view them in the online repository, like Bitbucket.
Bitbucket and GitHub have a lot of add-ons or "services", but I couldn't find similar functionality anywhere.
There is a cloud-based solution called Todofy (https://todofy.org). It lists all the TODOs in the repository and keeps tracking their state until they are finished (removed from the code). It provides more features like adding a deadline, reminders, assigning someone or bringing someone into a discussion, labels, etc.
Example comment with prettifiers (C++-style comment):
// TODO: something has to be done quickly #deadline: 1 week
// #assign: mebjas #priority: high
It has an option to auto create issue for it in Github.
I have created a node module to do exactly what you need. To adapt it to your usage, you will have to create a package.json file in which you mention the URL of your GitHub repository, and then create a .fixme-to-issue file including your GitHub credentials as well as the configuration for the annotations (for example, if the module finds a //TODO, it creates an issue with the label todos).
To install the module:
npm install -g fixme-to-issue
Here is a pretty straightforward Python script. It uses githubpy to interact with GitHub. It walks your current directory tree and grabs the given files (in this case *.cpp and *.h). It then goes through each of those files, finds every #TODO, and creates a GitHub issue for it. Finally it rewrites that line to TODO [GH<issue number>]:
import os
import re
import fnmatch
import fileinput
from github import GitHub  # githubpy

# user, password, projectAccount and project must be defined beforehand
gh = GitHub(username=user, password=password)

path = '.'
extensions = ["*.cpp", "*.h"]
configfiles = [os.path.join(dirpath, f)
               for dirpath, dirnames, files in os.walk(path)
               for extension in extensions
               for f in fnmatch.filter(files, extension)]

for fileName in configfiles:
    count = 1  # GitHub's #L line anchors are 1-based
    search = fileinput.input(fileName, inplace=1)
    for line in search:
        line = line.rstrip()  # remove '\n' at end of line
        if re.match(r"(.*)(\#)TODO:(.*)", line):
            todoInfo = re.sub(r"(.*)(\#)TODO:\s", "", line)
            fileNameShort = re.sub(r"\.\/", "", fileName)
            subject = fileNameShort + ":" + str(count) + " " + todoInfo
            # make a URL that links to the specific place in the file
            url = ("https://github.com/" + projectAccount + "/" + project
                   + "/blob/master/" + fileNameShort + "#L" + str(count))
            r = gh.repos(projectAccount)(project).issues.post(title=subject, body=url)
            line = re.sub(r"(\#)TODO:", "#TODO [GH" + str(r.number) + "]:", line)
        print(line)  # write the line back to the file
        count = count + 1
You can access the whole script on my GitHub: https://github.com/jmeed/todo2github
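Stripped of the GitHub calls, the scanning step of such a script boils down to a small, easily testable function (the names here are illustrative, not from the script above):

```python
import re

def extract_todos(text):
    """Return (line_number, todo_text) pairs for every '#TODO:' comment line."""
    todos = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = re.match(r".*#TODO:\s*(.*)", line)
        if m:
            todos.append((lineno, m.group(1)))
    return todos
```

For example, `extract_todos("x = 1\n#TODO: fix this\n")` returns `[(2, "fix this")]`, which is exactly the information (line number and subject text) the issue-creation call needs.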