This is my first time writing a .bb file, so please give me some help.
I can fetch an HTTP tarball from the external network; after I put it into the local source mirror directory, disable the external network, and run the recipe, it works well. But when I tried to fetch a git source tarball and did everything as before, the recipe failed to fetch the git source tarball from the source mirror once I disabled the external network.
ERROR: Task 587 (/$PATH/******.bb, do_fetch) failed with exit code '1'
NOTE: Tasks Summary: Attempted 402 tasks of which 382 didn't need to be rerun and 1 failed.
The following is my bb file:
SRCBRANCH = "********"
SRCREV = "AUTOINC"
SRC_URI = "git://***************.git;branch=${SRCBRANCH};protocol=https"
LIC_FILES_CHKSUM = "file://LICENSE;beginline=4;endline=16;md5=**********"
SRC_URI[md5sum] = "***************"
SRC_URI[sha256sum] = "***************"
S = "${WORKDIR}/git"
My guess is that, since you use AUTOINC, the cause of your error could be a checksum mismatch, but since you haven't provided the error message from your do_fetch log, I cannot say for sure. You can find it at a path like:
build/tmp/work/one_of_directories/name_of_your_recipe/version/temp/log.do_fetch
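If you are not sure which work directory the recipe ended up in, one way to locate the log (a generic sketch; substitute your own recipe name) is:
$ find build/tmp/work -name "log.do_fetch*" -path "*your-recipe-name*"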
I have a recipe that looks basically like this:
SUMMARY = "SomeLibrary"
LICENSE = "Apache-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
SRC_URI += "git://gitlab.com/some_library/some-library.git;protocol=https;nobranch=1"
SRCREV = "${PV}"
S = "${WORKDIR}/git"
inherit autotools pkgconfig
It builds successfully with bitbake some-library, and I can see there is a git2/gitlab.com.some_library.some-library.git/ directory and a git2/gitlab.com.some_library.some-library.git.done file in my downloads folder (the one DL_DIR points to).
My understanding is that if I then immediately run bitbake -c cleansstate some-library && bitbake some-library, given that there is no change in the recipe, bitbake shouldn't need to download anything (it already has everything it needs). In practice, if I turn off my network connection or add BB_NO_NETWORK="1" to my local.conf, I get the following error:
Initialising tasks: 100% |################################################################| Time: 0:00:01
Sstate summary: Wanted 12 Found 4 Missed 8 Current 251 (33% match, 96% complete)
NOTE: Executing Tasks
ERROR: some-library-v2.3.0-r0 do_fetch: Bitbake Fetcher Error: NetworkAccess('https://gitlab.com/some_library/some-library.git', 'git -c core.fsyncobjectfiles=0 ls-remote "https://gitlab.com/some_library/some-library.git" ')
ERROR: Logfile of failure stored in: /home/myusername/work/builddir/tmp/work/aarch64-poky-linux/some-library/v2.3.0-r0/temp/log.do_fetch.116252
ERROR: Task (/home/myusername/work/builddir/../../layers/meta-mymeta/recipes-core/some-library/some-library_v2.3.0.bb:do_fetch) failed with exit code '1'
NOTE: Tasks Summary: Attempted 806 tasks of which 804 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/myusername/work/builddir/../../layers/meta-mymeta/recipes-core/some-library/some-library_v2.3.0.bb:do_fetch
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
Why is that? How do other recipes avoid this pitfall? (When I build my image, this recipe seems to be the only one trying to fetch things from the network, which suggests to me that I'm doing something wrong here.)
EDIT:
What really puzzles me is that bitbake seems to behave differently with recipes other than my own. For example, the recipe for can-utils located at meta-openembedded/meta-oe/recipes-extended/socketcan/can-utils_git.bb looks like this:
SUMMARY = "Linux CAN network development utilities"
LICENSE = "GPLv2 & BSD-3-Clause"
LIC_FILES_CHKSUM = "file://include/linux/can.h;endline=44;md5=a9e1169c6c9a114a61329e99f86fdd31"
DEPENDS = "libsocketcan"
SRC_URI = "git://github.com/linux-can/${BPN}.git;protocol=https;branch=master"
SRCREV = "da65fdfe0d1986625ee00af0b56ae17ec132e700"
PV = "2020.02.04"
S = "${WORKDIR}/git"
inherit autotools pkgconfig
which is very similar, but when I set BB_NO_NETWORK="1" in my local.conf and run bitbake -c cleansstate can-utils && bitbake can-utils, I get Tasks Summary: Attempted 842 tasks of which 822 didn't need to be rerun and all succeeded.
This works for me:
After configuring the project, add the following lines to the build/conf/site.conf file:
# Build offline
SOURCE_MIRROR_URL ?= "file:///path/to/oe-downloads"
INHERIT += " own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
BB_NO_NETWORK = "1"
After that, it might be necessary to build the project once while online.
After every re-configuration (with different build options) site.conf is overwritten, so I created a script to re-add these lines after each re-configuration.
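As a rough sketch (the file path and mirror location are just examples), such a script only needs to append the lines when they are not already present:
#!/bin/sh
# re-add the offline-build settings after the project has been re-configured
SITE_CONF=build/conf/site.conf
grep -q "own-mirrors" "$SITE_CONF" 2>/dev/null || cat >> "$SITE_CONF" <<'EOF'
# Build offline
SOURCE_MIRROR_URL ?= "file:///path/to/oe-downloads"
INHERIT += " own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
BB_NO_NETWORK = "1"
EOF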
I believe I found the issue.
If I replace ${PV} (which was equal to v2.3.0 here) with the hash associated with that tag, the issue stops happening.
If I interpret this correctly, it means that bitbake is able to tell whether SRCREV is a hash or a tag, and that if it is a tag, do_fetch will always run git ls-remote to make sure that the tag has not been moved.
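In other words, pinning SRCREV to the commit the tag points at (the hash below is purely illustrative) keeps do_fetch from needing the network:
# illustrative commit hash; use the one your v2.3.0 tag resolves to
SRCREV = "0123456789abcdef0123456789abcdef01234567"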
Fetcher failure for URL: 'https://gitlab.linphone.org/BC/public/external/polarssl.git'. Missing SRC_URI checksum. Any pointers are appreciated. I am able to browse the link.
When fetching a project from git, there is no need for a checksum.
Checksum specification is only needed for file sources (.tar.gz, etc.).
So I created a simple recipe for the URL you specified:
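For comparison, a plain file source is the case that needs the checksum entry (the URL and digest below are placeholders):
SRC_URI = "https://example.com/downloads/some-library-2.3.0.tar.gz"
SRC_URI[sha256sum] = "<sha256 of the tarball>"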
SUMMARY = "PolarSSL recipe"
LICENSE = "CLOSED"
PROTOCOL = "https"
BRANCH = "linphone-1.4"
GIT_SRC = "gitlab.linphone.org/BC/public/external/polarssl.git"
SRC_URI = "git://${GIT_SRC};protocol=${PROTOCOL};branch=${BRANCH}"
SRCREV = "9864c92b71b81dd1dda885eae108cc3fc9a0cf4b"
S = "${WORKDIR}/git"
inherit autotools
I got the SRCREV value from the last commit on the linphone-1.4 branch.
I didn't go further with specifying the correct LICENSE and do_install.
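If you want to check that commit for yourself, something like the following (run while online, anywhere git is installed) prints the branch tip:
$ git ls-remote https://gitlab.linphone.org/BC/public/external/polarssl.git refs/heads/linphone-1.4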
I tested this recipe with a zeus build.
My main.tf file looks like this:
module "sql_vms" {
source = "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq#dev.azure.com/sampleuser/my_code/_git/terraform_modules.git//compute"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
The modules are in the same repo, which is technically not ideal, but for now I want to use the Azure repo, which holds a Terraform module and creates multiple VMs from TF modules.
I get the error below:
2020-08-23T02:27:38.1439274Z [command]/usr/local/bin/terraform init -backend-config=storage_account_name=stoaccautomationnonprod -backend-config=container_name=stoacccon01nonprod -backend-config=key=nonprod.tfstate -backend-config=resource_group_name=automation -backend-config=arm_subscription_id=cc800481-b728-4d8f-81be-e80b955d346e -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
2020-08-23T02:27:38.1441494Z Initializing modules...
2020-08-23T02:27:38.1442513Z Downloading git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git for sql_vms...
2020-08-23T02:27:38.1444113Z Error: Failed to download module
2020-08-23T02:27:38.1445408Z Could not download module "sql_vms" (main.tf:1) source code from
2020-08-23T02:27:38.1446189Z "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git":
2020-08-23T02:27:38.1446845Z error downloading
2020-08-23T02:27:38.1447746Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git':
2020-08-23T02:27:38.1448669Z /usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
2020-08-23T02:27:38.1449408Z fatal: could not read Password for
2020-08-23T02:27:38.1450157Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com':
2020-08-23T02:27:38.1450684Z terminal prompts disabled
2020-08-23T02:27:38.1452230Z Error: Failed to download module
2020-08-23T02:27:38.1453109Z Could not download module "sql_vms" (main.tf:1) source code from
2020-08-23T02:27:38.1454386Z "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git":
2020-08-23T02:27:38.1454903Z error downloading
2020-08-23T02:27:38.1456723Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git':
2020-08-23T02:27:38.1457540Z /usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
2020-08-23T02:27:38.1458063Z fatal: could not read Password for
2020-08-23T02:27:38.1458813Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com':
2020-08-23T02:27:38.1459301Z terminal prompts disabled
2020-08-23T02:27:38.1496541Z ##[error]Terraform command 'init' failed with exit code '1'.: Failed to download module | Failed to download module
2020-08-23T02:27:38.1786437Z ##[section]Finishing: terraform init
I was thinking of using SSH instead of HTTPS with a PAT token, but unfortunately I couldn't figure out how to add the public key on the Microsoft-hosted agent.
Please assist.
When using an SSH key to pull the Terraform modules, you need to generate the SSH key yourself and then register its public key as an SSH key in Azure DevOps.
You then need to upload the private key as a secure file in the pipeline Library and add a step that installs the SSH key on your agent (the Install SSH Key task in the agent job).
You can find more details in the documentation about using SSH to pull a remote Terraform module.
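For reference, here is a sketch of what the module source could look like once SSH is in place; the v3 SSH URL format for Azure Repos is an assumption you should verify against the "Clone with SSH" URL of your repository, and the names are taken from the question:
module "sql_vms" {
  # SSH-style Azure Repos URL; authentication comes from the installed SSH key, not a PAT in the URL
  source = "git::ssh://git@ssh.dev.azure.com/v3/sampleuser/my_code/terraform_modules//compute"
  # ...remaining variables unchanged...
}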
In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE="GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python used in the recipe, because bb.data was deprecated and has since been removed. Replace bb.data.getVar(...) with d.getVar(...), for example:
PV_MAJOR = "${@d.getVar('PV', True).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV', True).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
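If you want to reproduce that check yourself, download the same file the recipe points at (substitute the expanded ${SITE}/v${PV_MAJOR}.${PV_MINOR} URL from your recipe) and hash it:
$ wget <expanded-site-url>/COPYING
$ md5sum COPYING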
Done!
Now bitbake nano ought to work, and when it completes you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, when you build with the bitbake you got from Poky, it builds only for the target unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are trying to build version 2.2.6 of the nano editor; its license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can use the same concept to code the next tasks (do_configure, do_compile), as sketched after this block:
python do_unpack() {
workdir = d.getVar("WORKDIR", True)
dl_dir = d.getVar("DL_DIR", True)
p = d.getVar("P", True)
tarball_name = os.path.join(dl_dir, p+".tar.gz")
bb.plain("Unpacking tarball")
os.system("tar -x -C " + workdir + " -f " + tarball_name)
bb.plain("tarball unpacked successfully")
}
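Following the same pattern, a do_compile could look roughly like this; the source directory name (${WORKDIR}/${P}) and the plain configure-and-make invocation are assumptions based on how the tarball above is unpacked, not the book's exact code:
python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    # the tarball from do_unpack extracts into ${WORKDIR}/${P}
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and building nano")
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("nano built successfully")
}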
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable at the following path, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!
I just can't figure out what's going on with my rsync setup. I'm running rsync on RHEL 5, ip = xx.xx.xx.97. It's getting files from another RHEL 5 machine, ip = xx.xx.xx.96.
Here's what the log (which I specified on the rsync command line) shows on xx.97 (the one requesting the files):
(local time)
2015/08/30 13:40:01 [17353] @ERROR: auth failed on module tomcat_backup
2015/08/30 13:40:01 [17353] rsync error: error starting client-server protocol (code 5) at main.c(1530) [receiver=3.0.6]
Here's what the log (which is specified in the rsyncd.conf file) shows on xx.96 (the one supplying the files):
(UTC time)
2015/08/30 07:40:01 [8836] name lookup failed for xx.xx.xx.97: Name or service not known
2015/08/30 07:40:01 [8836] connect from UNKNOWN (xx.xx.xx.97)
2015/08/30 07:40:01 [8836] auth failed on module tomcat_backup from unknown (xx.xx.xx.97): password mismatch
Here's the actual rsync.sh command called from xx.xx.xx.97 (the requester):
export RSYNC_PASSWORD=rsyncclient
rsync -havz --log-file=/usr/local/bin/RSync/test.log rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ProcessSniffer/ /usr/local/bin/ProcessSniffer
Here's the rsyncd.conf on xx.xx.xx.97:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[files]
name = tomcat_backup
path = /usr/local/bin/
comment = The copy/backup of tomcat from .96
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.96/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.97:
files:files
Here's the rsyncd.conf on xx.xx.xx.96 (the supplier of files):
Note: there is a 'cwrsync' (Windows version of rsync) successfully calling for files also (xx.xx.xx.100)
Note: yes, there is the possibility of xx.96 requesting files from xx.97. However, this is NOT actually happening.
It's commented out of the init.d mechanism.
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
pid file = /var/run/rsync.pid
strict modes = false
[files]
name = tomcat_backup
path = /usr/local/bin
comment = The copy/backup of tomcat from xx.97
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.97/255.255.255.0, xx.xx.xx.100/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.96:
files:files
It was something else. I had a script calling the rsync command, and that was causing the problem. The actual rsync command line was ok.
Apologies.
This is what I went through when I got this error. My first thought was to check the rsync server log, but it was not in the place configured in rsyncd.conf. Then I checked the log printed by systemctl status rsyncd:
rsyncd[23391]: auth failed on module signaling from unknown (172.28.15.10): missing secret for user "rsync_backup"
rsyncd[23394]: Badly formed boolean in configuration file: "no # rsync daemon before transmission, change to the root directory and limited within.".
rsyncd[23394]: params.c:Parameter() - Ignoring badly formed line in configuration file: ignore errors # ignore some io error informations.
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot upload file to this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot download file from this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, can only list files here.".
Combined with the fact that the log configuration did not take effect, it seems that the trailing comment after each configuration line in rsyncd.conf makes those settings invalid. So I deleted those # ... comments and restarted rsyncd.
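For illustration, here is what that module could look like with the comments moved onto their own lines; the parameter names are inferred from the error messages above, so check them against your own file:
[signaling]
# before transmission, change to the module's root directory and stay within it
use chroot = no
# ignore some I/O error messages
ignore errors
# if true, clients cannot upload files to this server
read only = false
# if true, clients cannot download files from this server
write only = false
# if true, clients can only list files here
list = false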