Why might I get "expected: MappingNode was SequenceNode" during kpt pkg get? - kubernetes

I am working through https://www.kubeflow.org/docs/distributions/gke/deploy/deploy-cli/ and at the step bash ./pull-upstream.sh there is a problem. I have isolated it to a single command inside the script:
kpt pkg get https://github.com/zijianjoy/pipelines.git/manifests/kustomize/#upgradekpt upstream
When I run this command alone, I get the same error as when it runs in the script:
Package "upstream":
Fetching https://github.com/zijianjoy/pipelines#upgradekpt
From https://github.com/zijianjoy/pipelines
* branch upgradekpt -> FETCH_HEAD
Adding package "manifests/kustomize".
Fetched 1 package(s).
Error: /home/tester_user/gcp-blueprints/kubeflow/apps/pipelines/upstream/third-party/argo/upstream/manifests/namespace-install/overlays/argo-server-deployment.yaml: wrong Node Kind for expected: MappingNode was SequenceNode: value: {- op: add
path: /spec/template/spec/containers/0/args/-
value: --namespaced}
I made some mistakes following the script during the setup (which I think I corrected), so it could be something I did. Even so, it would be good to know why this error is happening, for my general understanding.
I am on Google Cloud Platform, using the command-line prompt that comes built into the web UI.


Gitlab CI pipeline failing: a tag issue

My GitLab CI pipeline is set up to run Maven tests from a Docker image created from my Maven project.
I have tested the pipeline on my master branch and it worked fine and ran the test.
However, I have created a new feature branch, and when running the pipeline again I now get this error:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the runes `abcdefghijklmnopqrstuvwxyz0123456789_-./`: it2901/cs344-maven:feature/produce-allocation-pdf
ERROR: Job failed: command terminated with exit code 1
I can't seem to pinpoint the problem at all. I have also pushed the tag tut3 to the feature branch.
Here is my .gitlab-ci.yml: https://controlc.com/7a94a00f
Based on what you shared, you have this configured:
VERSIONLABELMETHOD: "tut3" # options: "","LastVersionTagInGit"
It should be either:
VERSIONLABELMETHOD: ""
or
VERSIONLABELMETHOD: "LastVersionTagInGit"
or
VERSIONLABELMETHOD: "OnlyIfThisCommitHasVersion"
When you specify "tut3", the script treats it as if it were "" (an empty string). Assuming you didn't define $VERSIONLABEL anywhere, $ADDITIONALTAGLIST will also be empty.
And later in the code you can see that this gets executed:
if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then ADDITIONALTAGLIST="$ADDITIONALTAGLIST latest"; fi
Assuming $CI_DEFAULT_BRANCH is set to master, if you use a separate branch such as mybranch the code above won't be executed, so it's likely that the Kaniko command line ends up with neither a valid $FORMATTEDTAGLIST nor valid $IMAGE_LABELS.
You can debug this by looking at their values, which the script already echoes at the end, just before calling Kaniko:
...
echo $FORMATTEDTAGLIST
echo $IMAGE_LABELS
mkdir -p /kaniko/.docker
...
A hack would be to override $CI_DEFAULT_BRANCH with your custom branch.
✌️

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command, which will get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
  ${TOPDIR}/meta-hello \
  /tmp/poky/meta \
  /tmp/poky/meta-poky \
  /tmp/poky/meta-yocto-bsp \
  "
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book-website's bbhello/meta-hello/conf/layer.conf being internally inconsistent. It uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE="GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python expressions used in the recipe, because bb.data's getVar was deprecated and has since been removed. Instead, call getVar on the datastore d, ex:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
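If you want to verify that checksum yourself, a small plain-Python sketch works. The URL below is an assumption based on how ${SITE}/v${PV_MAJOR}.${PV_MINOR} expands for nano 2.2.6, so substitute whatever your recipe actually points at:
import hashlib
import urllib.request

# Assumed location of nano 2.2.6's COPYING file; adjust to match your recipe's SITE expansion.
url = "https://www.nano-editor.org/dist/v2.2/COPYING"
data = urllib.request.urlopen(url).read()
print(hashlib.md5(data).hexdigest())  # compare against the md5= value in LIC_FILES_CHKSUM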
Done!
Now bitbake nano ought to work, and when it is complete you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake that you got from Poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do that by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor, whose license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means you must access it through the d datastore. Below is a suggestion for the do_unpack Python function; you can apply the same idea to the next tasks (do_configure, do_compile), a rough sketch of which follows the function:
python do_unpack() {
    # Pull the paths and package name out of the BitBake datastore d
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
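For reference, here is a rough sketch of what do_configure and do_compile could look like following the same pattern. It assumes the tarball unpacks into a directory named after ${P} and that this version of nano builds with a plain ./configure and make; treat it as a starting point rather than the book's exact code:
python do_configure() {
    # Locate the unpacked source tree under WORKDIR
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring " + p)
    os.system("cd " + src_dir + " && ./configure")
}

python do_compile() {
    # Build in the same source tree that do_configure prepared
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Compiling " + p)
    os.system("cd " + src_dir + " && make")
}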
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory if you are building on Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Unhelpful output from pytest

TLDR: How can I get better output from pytest?
I'm using Django with regular python3 unittests.
I've just switched to pytest-django for running tests.
pytest throws an error for almost all my tests (149 in total).
Pages and pages of this error:
self = <RegexURLResolver 'project.urls' (None:None) ^/>
@property
def reverse_dict(self):
language_code = get_language()
if language_code not in self._reverse_dict:
self._populate()
> return self._reverse_dict[language_code]
E KeyError: 'en-us'
This wasn't the actual problem; it led me down the wrong path.
I had a syntax error in one of my views.py files.
./manage.py test resulted in:
snip
File "/home/roland/project/views.py", line 20
code = zip(list1, list2])
SyntaxError: invalid syntax
Notice the trailing ], which was the problem.
So: How can I get more useful output on problems when using pytest?
Btw: after finding this and scrolling back through the pytest output, there was a mention of the syntax error. It was just buried in the output.
You can use the --maxfail=1 option so that pytest stops immediately on the first failure.
Also, make sure your pytest.ini is set up properly so that pytest knows it should be using pytest-django:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
For my workflow, I usually do the following:
run pytest --maxfail=1 myfile.py &> pytest-output.txt
tail, grep, or search the text file for errors.
Fix and iterate
There are a lot of other configuration options that will help you to get more meaningful input from pytest.

How can I debug my python unit tests within Tox with PUDB?

I'm trying to debug a python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would just pip install pudb, then add import pudb and pudb.set_trace() to the unit test code. But that results in a ModuleNotFoundError:
> import pudb
>E ModuleNotFoundError: No module named 'pudb'
>tests/mytest.py:130: ModuleNotFoundError
> ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within tox, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3
[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>
def fileno(self):
> raise UnsupportedOperation("redirected stdin is pseudofile, "
"has no fileno()")
E io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()
.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3
[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
Using the original PDB (not PUDB) could work too. At least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need it, with:
import pdb; pdb.set_trace()
Then, when execution gets to that breakpoint, you can use the regular PDB commands (a small usage sketch follows this list):
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function
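For illustration, here is a hypothetical test with the breakpoint dropped in (the file and function names are made up). Depending on your pytest version you may need to run it with pytest -s so the debugger can read from stdin:
# test_example.py (hypothetical)
def test_total():
    items = [1, 2, 3]
    import pdb; pdb.set_trace()  # execution pauses here; try "p items", "n", then "c"
    assert sum(items) == 6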

capistrano upload! thinks ~ referenced local directory is on remote server

So every example I've looked up indicates this is how one is supposed to do it, but I think I may have found a bug, unless there's another way to do this.
I'm using upload! to upload assets to a remote list of servers. The task looks like this:
desc "Upload grunt compiled css/js."
task :upload_assets do
on roles(:all) do
%w{/htdocs/css /htdocs/js}.each do |asset|
upload! "#{fetch(:local_path) + asset}", "#{release_path.to_s + '/' + asset}", recursive: true
end
end
end
If local_path is defined as an absolute path such as:
set :local_path, '/home/dcmbrown/projects/ABC'
This works fine. However if I do the following:
set :local_path, '~/projects/ABC'
I end up getting the error:
The deploy has failed with an error: Exception while executing on ec2-54-23-88-125.us-west-2.compute.amazon.com: No such file or directory - ~/projects/ABC/htdocs/css
It's not a ' vs " issue as I've tried both (and I didn't think capistrano paid attention to that anyway).
Is this a bug? Is there a work around? Am I just doing it wrong?
I ended up discovering the best way to do this is to actually use path expansion! (headsmack)
irb> File.expand_path('~dcmbrown/projects/ABC')
=> "/home/dcmbrown/projects/ABC"
Of course what I'd like is automatic path expansion, but you can't have everything. I was mostly dumbstruck that it didn't expand automatically; so much so that I spent a couple of hours trying to figure out why it didn't work and ended up wasting time asking here. :(
I don't think the error is coming from the remote server; it just looks like it since it's running that upload command in the context of a deploy.
I created a single cap task to do an upload using the "~" character, and it also fails with:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy#XXX: No such file or directory # rb_file_s_stat - ~/Projects/testapp/public/404.html
It appears to be a Ruby issue, not Capistrano, as this also fails in a Ruby console:
~/Projects/testapp $ irb
2.2.2 :003 > File.stat('~/Projects/testapp/public/404.html')
Errno::ENOENT: No such file or directory # rb_file_s_stat - ~/Projects/testapp/public/404.html
from (irb):3:in `stat'
from (irb):3
from /Users/supairish/.rvm/rubies/ruby-2.2.2/bin/irb:11:in `<main>'