Buildbot 'try' command starts a build but does not actually apply patch

When I attempt a buildbot try command, the patch is sent and the build starts, but the patch is never actually applied.
My setup uses SVN, with 2 source control steps:
c['change_source'].append(SVNPoller("%s/trunk/a" % base_url , pollinterval=10))
c['change_source'].append(SVNPoller("%s/trunk/b" % base_url , pollinterval=10))
and...
self.addStep(SVN(repourl="%s/trunk/a" % base_url, workdir="build/a"))
self.addStep(SVN(repourl="%s/trunk/b" % base_url, workdir="build/b"))
These get put into the build directory on the slave like:
build/a/...
build/b/...
Then I attempt to run the 'try' command from my local computer:
svn co '.../trunk/a'
cd a
update some files
buildbot try --vc svn --connect pb -m192.168.0.100:5555 \
-uuser --passwd=pass -w user -C "comment" --topdir="a"
I can see on the server that the patch is generated:
svn update ( 11 secs )
patch
stdio
svn_1 update ( 3 secs )
patch
stdio
and the patch looks correct-ish
Index: mmfx/project/se_lib_tests/mmif_unit_tests.c
===================================================================
--- mmfx/project/se_lib_tests/mmif_unit_tests.c (revision 5952)
+++ mmfx/project/se_lib_tests/mmif_unit_tests.c (working copy)
...
However, the patch is never actually applied to the source files. My suspicion is that buildbot doesn't know how to apply the patch to just the 'build/a' tree -- it attempts to do it to the 'build' tree, and silently fails.
Any ideas how to make this work right?
Thanks,
- Caleb

Related

How to patch remote source code locally in yocto project?

Sometimes we meet a situation where remote source code fetched by a recipe needs to be modified to suit a specific machine.
How do we create a patch for the remote source code locally, so that every time we build the recipe (even after cleaning it completely) the remote source code gets patched automatically?
For example, I have a special machine with an uncommon architecture A, so the remote source code needs to be modified to support architecture A.
Suppose there is a file called utils.h (code fetched by example.bb from a remote git repository):
#if defined(__x86_64__) || \
    defined(__mips__) || \
    defined(__powerpc__) || defined(__ppc__) || defined(__ppc64__)
#define SOME_FUNCTIONALITY 1
Apparently I need to add architecture A support in the file:
#if defined(__x86_64__) || \
    defined(__mips__) || \
    defined(__powerpc__) || defined(__ppc__) || defined(__ppc64__) || \
    defined(__A__)
#define SOME_FUNCTIONALITY 1
But if we just modify it like that, the next time we execute
bitbake -c cleanall example
bitbake example
we get an unchanged copy again (which means we have to modify it again).
How do we create an Add-architecture-A-support.patch locally so that the remote source code gets patched automatically?
Here is a simple approach, based on the answers.
(Note: if there is no git repository in the source directory, then before modifying the source code you need to create one and commit everything in the top directory of the source tree.)
git init # create a git repository
git add .
git commit -m "First commit" # first commit
After changing utils.h as above, we can check the git status. It usually looks like this:
$ git status
HEAD detached from 87b933c420
Changes not staged for commit:
(use "git add <file>..." to update what will be comitted)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: ../../utils.h
...
no changes added to commit (use "git add" and/or "git commit -a")
Then we add and commit the change locally (usually we don't have permission to push upstream).
$ git add utils.h
$ git commit -m "Patch test"
After that we can use git to create a patch for the recent commit.
$ git show >Add-architecture-A-support.patch
This creates a patch in the current directory with contents that look like this:
commit a79e523...
Author: 杨...
Date: ...
Patch test
diff --git a/somedir/utils.h b/somedir/utils.h
index 20bfd36c84..
--- a/somedir/utils.h
+++ b/somedir/utils.h
...
+ defined(__A__) \
...
Then we can move the patch into the local layer where the recipe lives:
recipe-example
|-- example
| |-- Add-architecture-A-support.patch
|-- example.bb
And add the patch to SRC_URI in example.bb:
SRC_URI += "\
file://Add-architecture-A-support.patch \
"
Work finished. (Also, if you want to undo the local commit after creating the patch, git reset HEAD^ will undo the commit while keeping the change in your working tree.)
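If example.bb itself lives in another layer that you would rather not modify, the same patch can be carried in your own layer through a bbappend instead. A minimal sketch, assuming the patch sits in a directory named after the recipe next to the append file (older releases use the FILESEXTRAPATHS_prepend spelling):
# example_%.bbappend  (hypothetical append file in your own layer)
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://Add-architecture-A-support.patch"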

How to update modules.conf for SELINUX in BUILDROOT?

I'm looking to disable some SELinux modules (set them to "off") and create others in modules.conf. I don't see an obvious way of updating modules.conf: I tried adding my changes as a modules.conf patch, but that failed because modules.conf gets built rather than just downloaded by Buildroot, so it is not available for patching like other files under the refpolicy directory:
Build window output:
>>> refpolicy 2.20190609 Patching
Applying 0001-refpolicy-update-modules-conf.patch using patch:
can't find file to patch at input line 3
I did see in the log that there is a support/sedoctool.py script that autogenerates the policy/modules.conf file, which is why the file is NOT patchable like most other things in the refpolicy.
The relevant section of the buildroot/output/build/refpolicy-2.20190609/Makefile:
# policy building support tools
support := support
genxml := $(PYTHON) $(support)/segenxml.py
gendoc := $(PYTHON) $(support)/sedoctool.py
<...snip...>
########################################
#
# Create config files
#
conf: $(mod_conf) $(booleans) generate

$(booleans) $(mod_conf): conf.intermediate

.INTERMEDIATE: conf.intermediate

conf.intermediate: $(polxml)
	@echo "Updating $(booleans) and $(mod_conf)"
	$(verbose) $(gendoc) -b $(booleans) -m $(mod_conf) -x $(polxml)
Part of the hsmlinux build.log showing the sedoctool.py (gendoc) being run:
Updating policy/booleans.conf and policy/modules.conf
.../build-buildroot-sawshark/buildroot/output/host/usr/bin/python3 support/sedoctool.py -b policy/booleans.conf -m policy/modules.conf -x doc/policy.xml
I'm sure there is a standard way of doing this, just doesn't seem to be documented anywhere I can find.
Thanks.
Turns out that the sedoctool.py script is reading the doc/policy.xml. Looking at sedoctool.py:
#modules enabled and disabled values
MOD_BASE = "base"
MOD_ENABLED = "module"
MOD_DISABLED = "off"
<...snip...>
def gen_module_conf(doc, file_name, namevalue_list):
"""
Generates the module configuration file using the XML provided and the
previous module configuration.
"""
# If file exists, preserve settings and modify if needed.
# Otherwise, create it.
<...snip...>
    mod_name = node.getAttribute("name")
    mod_layer = node.parentNode.getAttribute("name")
    <...snip...>
    if mod_name and mod_layer:
        file_name.write("# Layer: %s\n# Module: %s\n" % (mod_layer, mod_name))
        if required:
            file_name.write("# Required in base\n")
        file_name.write("#\n")

    # If the module is set as disabled.
    if [mod_name, MOD_DISABLED] in namevalue_list:
        file_name.write("%s = %s\n\n" % (mod_name, MOD_DISABLED))
    # If the module is set as enabled.
    elif [mod_name, MOD_ENABLED] in namevalue_list:
        file_name.write("%s = %s\n\n" % (mod_name, MOD_ENABLED))
    # If the module is set as base.
    elif [mod_name, MOD_BASE] in namevalue_list:
        file_name.write("%s = %s\n\n" % (mod_name, MOD_BASE))
So sedoctool.py has the nice feature described in its comment: "If file exists, preserve settings and modify if needed." That means a complete modules.conf can be added here as a whole-file patch (refpolicy-2.20190609/policy/modules.conf) with the unwanted modules set to "off", and the script will update it as needed based on the desired policy.
One more detail: in the next stage of the refpolicy Makefile (Building), the updated modules.conf is deleted right at the start, which clashes with sedoctool's ability to preserve the patched modules.conf, so I also patched that removal out of the Building stage of the Makefile.
>>> refpolicy 2.20190609 Building
<...snip...>
rm -f policy/modules.conf
The Makefile in refpolicy-2.20190609 has this line that I patched out because we are patching in our own modules.conf:
bare: clean
<...snip...>
$(verbose) rm -f $(mod_conf)
That patch looks like:
--- BUILDROOT/Makefile 2020-08-17 13:25:06.963804709 -0400
+++ FIX/Makefile 2020-08-17 19:25:29.540607763 -0400
@@ -636,7 +636,6 @@
$(verbose) rm -f $(modxml)
$(verbose) rm -f $(tunxml)
$(verbose) rm -f $(boolxml)
- $(verbose) rm -f $(mod_conf)
$(verbose) rm -f $(booleans)
$(verbose) rm -fR $(htmldir)
$(verbose) rm -f $(tags)
BTW, creating a patch that contains a complete new file in pp1:
diff -crB --new-file pp0 pp1 > pp0.patch
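Applied to the modules.conf case above, that workflow might look like this (pp0/pp1 are just before/after copies of the tree; the patch name is an example):
# pp0 = pristine refpolicy tree, pp1 = copy containing the complete policy/modules.conf
cp -r refpolicy-2.20190609 pp0
cp -r refpolicy-2.20190609 pp1
vi pp1/policy/modules.conf        # full file, with unwanted modules set to "off"
diff -crB --new-file pp0 pp1 > 0002-refpolicy-add-modules-conf.patch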

Gitlab CI pipeline failing: a tag issue

My GitLab CI pipeline is set up to run Maven tests from a Docker image created from my Maven project.
I have tested the pipeline on my master branch and it worked fine and ran the tests.
However, I have created a new feature branch, and now running the pipeline again gives this error:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the runes `abcdefghijklmnopqrstuvwxyz0123456789_-./`: it2901/cs344-maven:feature/produce-allocation-pdf
ERROR: Job failed: command terminated with exit code 1
I can't seem to pinpoint the problem at all. I have also pushed the tag tut3 to the feature branch.
Here is my .gitlab-ci.yml: https://controlc.com/7a94a00f
Based on what you shared, you have this configured:
VERSIONLABELMETHOD: "tut3" # options: "","LastVersionTagInGit"
It should be either:
VERSIONLABELMETHOD: ""
or
VERSIONLABELMETHOD: "LastVersionTagInGit"
or
VERSIONLABELMETHOD: "OnlyIfThisCommitHasVersion"
When you specify "tut3", the script takes it as if it were "" (empty string). Assuming you didn't define $VERSIONLABEL anywhere, $ADDITIONALTAGLIST will also be empty.
And later in the code you can see that this gets executed:
if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then ADDITIONALTAGLIST="$ADDITIONALTAGLIST latest"; fi
Assuming $CI_DEFAULT_BRANCH is set to master, if you use a separate branch mybranch the line above won't be executed, so it's likely that the Kaniko command line ends up without a valid $FORMATTEDTAGLIST or $IMAGE_LABELS.
You can debug by echoing their values, which the script does near the end, right before calling Kaniko:
...
echo $FORMATTEDTAGLIST
echo $IMAGE_LABELS
mkdir -p /kaniko/.docker
...
A hack would be to override $CI_DEFAULT_BRANCH with your custom branch.
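For illustration only, that override could go in the job's variables block in .gitlab-ci.yml; the job name below is a placeholder, and the branch value is the one from the question:
build-image:                                             # hypothetical job name
  variables:
    VERSIONLABELMETHOD: "LastVersionTagInGit"            # one of the supported values
    CI_DEFAULT_BRANCH: "feature/produce-allocation-pdf"  # the hack: treat this branch as the default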
✌️

unable to trigger job in concourse

I am new to Concourse and set up the environment on my CentOS 7.6 machine as below.
$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up -d
Then login by `fly --target example login --team-name main --concourse-url http://192.168.77.140:8080/ -u test -p test`
I can see the following:
[root@centostest ~]# fly targets
name     url                          team  expiry
example  http://192.168.77.140:8080   main  Sun, 16 Jun 2019 02:23:48 UTC
I used the pipeline YAML below, saved as 2.yaml:
---
resources:
- name: my-git-repo
  type: git
  source:
    uri: https://github.com/ruanbekker/concourse-test
    branch: basic-helloworld

jobs:
- name: hello-world-job
  public: true
  plan:
  - get: my-git-repo
  - task: task_print-hello-world
    file: my-git-repo/ci/task-hello-world.yml
Then I run below commands step by step.
fly -t example sp -c 2.yaml -p pipeline-01
fly -t example up -p pipeline-01
fly -t example tj -j pipeline-01/hello-world-job --watch
But it just hangs there, with no useful response:
[root@centostest ~]# fly -t example tj -j pipeline-01/hello-world-job --watch
started pipeline-01/hello-world-job #3
Theoretically, it should print something like below.
Cloning into '/tmp/build/get'...
Fetching HEAD
292c84b change task name
initializing
running echo hello world
hello world
succeeded
Where did I go wrong? Thanks.
Welcome to Concourse!
One thing that can be confusing when starting with Concourse is understanding when Concourse detects that the pipeline has changed and what happens if the pipeline is one file or multiple files.
Your pipeline (like the majority of real-world pipelines) is "nested": the main pipeline file 2.yaml refers to a task file named my-git-repo/ci/task-hello-world.yml.
What sets Concourse apart from other CI systems is that:
1. The main pipeline file (2.yaml) can reside anywhere, even in a different repository.
2. Because of 1, Concourse is unable to detect a change to the main pipeline file; you have to tell Concourse that the file has changed, either with fly set-pipeline or by automated means such as the concourse-pipeline-resource.
So the following errors happen often:
Changing the main pipeline file, committing and pushing, and expecting Concourse to pick up the change. Missing step: fly set-pipeline.
Once doing fly set-pipeline becomes second nature, you can stumble into the opposite error: changing both the main pipeline file and the nested task file, running set-pipeline, but not pushing. In this case the only changes picked up by Concourse are the ones to the main pipeline file, not to the task file. Missing step: commit and push (see the sketch at the end of this answer).
From the description of your problem, I have the feeling that it is a mixture of the gotchas I mentioned.
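A rough sketch of the two update paths, using the target and pipeline names from the question (the task-file path is an assumption based on the pipeline above):
# Changed the main pipeline file? Tell Concourse explicitly:
fly -t example set-pipeline -p pipeline-01 -c 2.yaml

# Changed the nested task file? Commit and push so the next 'get' of my-git-repo sees it:
git add ci/task-hello-world.yml
git commit -m "update hello-world task"
git push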

svn2git error PROPFIND request failed

I have Ruby, RubyGems, and svn2git installed under 32-bit Windows 7.
svn2git https://code.google.com/p/skyrim-plugin-decoding-project/ --rootistrunk --revision 1:1693 --authors ~/authors.txt --verbose
The above line returns the following error:
Running command: git svn init --prefix=svn/ --no-metadata --trunk=https://code.google.com/p/skyrim-plugin-decoding-project/
Initialized empty Git repository in e:/tes5edit/.git/
RA layer request failed: PROPFIND request failed on '/p/skyrim-plugin-decoding-project': PROPFIND of '/p/skyrim-plugin-decoding-project': 405 Method Not Allowed (https://code.google.com) at /usr/lib/perl5/site_perl/Git/SVN.pm line 310
command failed:
git svn init --prefix=svn/ --no-metadata --trunk=https://code.google.com/p/skyrim-plugin-decoding-project/
I read something about svnadmin, so I tried it, but got:
svnadmin: E205000: Repository argument required
I don't know what the argument would be.
I have never used GitBash or any of these programs. I have no idea what the proper commands would be to resolve the issue. I am also new to Git and have very little experience with it.
git svn clone http://my-project.googlecode.com/svn/ \
--authors-file=users.txt --no-metadata -s my_project
The standard commands also give errors
E:\TES5Edit_Git> git svn init https://code.google.com/p/skyrim-plugin-decoding-project/
Initialized empty Git repository in E:/TES5Edit_Git/.git/
E:\TES5Edit_Git [master]> git config svn.authorsfile ./authors.txt
E:\TES5Edit_Git [master +1 ~0 -0 !]> git svn fetch
RA layer request failed: PROPFIND request failed on '/p/skyrim-plugin-decoding-project': PROPFIND of '/p/skyrim-plugin-decoding-project': 405 Method Not Allowed (https://code.google.com) at /usr/lib/perl5/site_perl/Git/SVN.pm line 148
E:\TES5Edit_Git [master +1 ~0 -0 !]>
As long as it makes a repo I can push, I don't care how I do it. However, I did not start with a standard setup and had no idea what I was doing. So I want the clone to start at commit 1, treat the root as master, and have everything else (commits that create, rename, move, or delete folders) come in as branches.
After asking some friends I realized I had been using the wrong URL. The correct init statement would have been:
svn2git http://skyrim-plugin-decoding-project.googlecode.com/svn/ --rootistrunk --revision 1:1693 --authors ~/authors.txt --verbose