My Auto DevOps pipeline in GitLab is not working properly - Kubernetes

This is my test: I run the job with the Auto DevOps runner, without a .gitlab-ci.yml file, but I receive an error like this:
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Building Heroku-based application using gliderlabs/herokuish docker image...
Attempting to pull a previously built image for use with --cache-from...
invalid reference format
invalid reference format
No previously cached image found. The docker build will proceed without using a cached image
invalid argument "/master:fa1708343b13496937aa567a1aecdc184f43d197" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Cleaning up file based variables
30:00
ERROR: Job failed: command terminated with exit code 1
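The failing tag "/master:fa1708343b13496937aa567a1aecdc184f43d197" suggests that the repository part of the image name was empty, i.e. CI_REGISTRY_IMAGE resolved to nothing (for example because the project or instance has no container registry enabled), so the Auto DevOps build script produced a tag that starts with "/". A minimal workaround sketch under that assumption, using the documented Auto DevOps variables CI_APPLICATION_REPOSITORY and CI_APPLICATION_TAG (the registry URL below is a placeholder for one you can push to):

# .gitlab-ci.yml - keep Auto DevOps, but point the build at an explicit repository
include:
  - template: Auto-DevOps.gitlab-ci.yml
variables:
  # replaces the empty $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG default
  CI_APPLICATION_REPOSITORY: registry.example.com/mygroup/myproject/$CI_COMMIT_REF_SLUG
  CI_APPLICATION_TAG: $CI_COMMIT_SHA

Enabling the project's container registry (so that CI_REGISTRY_IMAGE is populated) should make this override unnecessary.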

Related

GitLab K8s Runner fails for get_sources

We are trying to move our gitlab-runners from standard CentOS VMs to Kubernetes, but after setup and registration the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
Notes:
The error "error: could not lock config file /root/.gitconfig: Read-only file system" occurs because the current user inside the container is different from root.
The file /logs-290-207167/output.log contains the log of the job pod.
Inside the job pod shell we also tested some git commands and successfully performed fetch and clone with our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but we have exhausted our investigation... :frowning:
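One direction worth trying, following the note above about the container user: if the runner was deployed with the GitLab Runner Helm chart, the build pods' security context can be adjusted through the runners.config passthrough in values.yaml. This is only a sketch under that assumption; the commented environment line is an alternative that gives git a writable HOME instead of changing the user:

runners:
  config: |
    [[runners]]
      # alternative: keep the non-root user but move HOME somewhere writable
      # environment = ["HOME=/builds"]
      [runners.kubernetes]
        namespace = "gitlab-runner"
        [runners.kubernetes.pod_security_context]
          # run the build and helper containers as root so /root/.gitconfig can be written
          run_as_user = 0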

Operator-SDK Error, "CRD is present in bundle but not defined in CSV"

I get the error "CRD is present in bundle but not defined in CSV" when I run make bundle.
The full output is:
/Users/foobar/Documents/my-operator/bin/controller-gen "crd:trivialVersions=true,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
operator-sdk generate kustomize manifests -q
cd config/manager && /Users/foobar/Documents/my-operator/bin/kustomize edit set image controller=registry.io/my-operator:latest
/Users/foobar/Documents/my-operator/bin/kustomize build config/manifests | operator-sdk generate bundle -q --overwrite --version 0.0.5
Error: accumulating resources: 2 errors occurred:
* accumulateFile error: "accumulating resources from '../samples': '/Users/foobar/Documents/my-operator/config/samples' must resolve to a file"
* accumulateDirector error: "recursed accumulation of path '/Users/foobar/Documents/my-operator/config/samples': accumulating resources: 2 errors occurred:\n\t* accumulateFile error: \"accumulating resources from 'myapplicationui.yaml': evalsymlink failure on '/Users/foobar/Documents/my-operator/config/samples/myapplicationui.yaml' : lstat /Users/foobar/Documents/my-operator/config/samples/myapplicationui.yaml: no such file or directory\"\n\t* loader.New error: \"error loading myapplicationui.yaml with git: url lacks orgRepo: myapplicationui.yaml, dir: evalsymlink failure on '/Users/foobar/Documents/my-operator/config/samples/myapplicationui.yaml' : lstat /Users/foobar/Documents/my-operator/config/samples/myapplicationui.yaml: no such file or directory, get: invalid source string: myapplicationui.yaml\"\n\n"
INFO[0000] Building annotations.yaml
INFO[0000] Writing annotations.yaml in /Users/foobar/Documents/my-operator/bundle/metadata
INFO[0000] Building Dockerfile
INFO[0000] Writing bundle.Dockerfile in /Users/foobar/Documents/my-operator
operator-sdk bundle validate ./bundle
INFO[0000] Found annotations file bundle-dir=bundle container-tool=docker
INFO[0000] Could not find optional dependencies file bundle-dir=bundle container-tool=docker
ERRO[0000] Error: Value myapplication.example.com/v1alpha1, Kind=MyApplication: CRD "myapplication.example.com/v1alpha1, Kind=MyApplication" is present in bundle "my-operator.v0.0.5" but not defined in CSV
ERRO[0000] Error: Value myapplication.example.com/v1alpha1, Kind=MyApplicationUI: CRD "myapplication.example.com/v1alpha1, Kind=MyApplicationUI" is present in bundle "my-operator.v0.0.5" but not defined in CSV
What is the cause of this error?
The error at the bottom is a red herring. The actual error is further up and, when you see it live, uncolored.
Specifically, a Kustomize YAML is expecting a myapplicationui.yaml file but can't find it.
This can easily happen when someone on your team renames a file (e.g. to myapplicationui_sample.yaml) without updating all of the references.
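In a standard operator-sdk layout that reference lives in config/samples/kustomization.yaml (pulled in via '../samples' from config/manifests), so after such a rename the fix is simply to update the resource list. A sketch, assuming the default scaffolding:

# config/samples/kustomization.yaml
resources:
  # was: - myapplicationui.yaml (the file was renamed without updating this list)
  - myapplicationui_sample.yaml

Once kustomize build config/manifests succeeds again, the generated CSV should pick up the owned CRDs and the "present in bundle but not defined in CSV" errors should disappear.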

Continue after a failing command in AppVeyor

In AppVeyor I use the statement:
- initexmf --admin --force --mklinks
but due to a problem it gives the message:
initexmf --admin --force --mklinks
Sorry, but "MiKTeX Configuration Utility" did not succeed for the following reason:
Script configuration file not found.
The log file hopefully contains the information to get MiKTeX going again:
C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
The system cannot find the path specified.
Command exited with code 1
Due to the error code the process terminates and I cannot type the C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log file any more, so it is a bit hard to debug...
Questions:
How do I continue after an error?
How do I stop after outputting the file (exit 1?)
To run a script on failure, use the on_failure section; for example, to push initexmf_admin.log to the build artifacts:
on_failure:
- appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
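Putting the two questions together, a sketch of the relevant appveyor.yml pieces (assumptions: the initexmf line runs from the install section under cmd, where appending "|| echo ..." swallows the non-zero exit code so the build keeps going):

install:
  # continue even if initexmf fails: the '|| echo' branch returns exit code 0
  - initexmf --admin --force --mklinks || echo initexmf failed, continuing
on_failure:
  # runs only when the build actually fails; uploads the MiKTeX log for inspection
  - appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log

Use one approach or the other, depending on whether you want the build to continue past the error or to stop but still capture the log.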

Failed do_rootfs for agl-demo-platform

I am building a Yocto image for AGL (for more details: automotivelinux.org).
The error below occurred during the build process (do_rootfs).
In packagegroup-agl-demo-platform.bb, packagegroup-agl-image-ivi is declared as a runtime dependency:
RDEPENDS_${PN} += "\
packagegroup-agl-image-ivi \
"
I can build packagegroup-agl-image-ivi successfully on its own, but when building the whole agl-demo-platform image the following happens:
ERROR: agl-demo-platform-1.0-r0 do_rootfs: Unable to install packages. Command '/LTSI4.9/LTSI4.4/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/opkg.conf -t /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/temp/ipktemp/ -o /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/rootfs --force_postinstall --prefer-arch-to-version install
run-postinsts
screen
kernel-modules
packagegroup-agl-devel
packagegroup-core-eclipse-debug
mc packagegroup-core-tools-profile
kernel-module-vsp2 kernel-module-pvrsrvkm
kernel-module-vspm-if
opkg packagegroup-core-tools-debug
psplash kernel-module-vspm
packagegroup-core-ssh-openssh
packagegroup-agl-demo-platform
omx-user-module kernel-devicetree'
returned 1:
Solver encountered 1 problem(s):
Problem 1/1:
- package packagegroup-agl-demo-platform-1.0-r0.all requires packagegroup-agl-image-ivi, but none of the providers can be installed
Solution 1:
- do not ask to install a package providing packagegroup-agl-demo-platform
ERROR: agl-demo-platform-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/temp/log.do_rootfs.14498
ERROR: Task (/LTSI4.9/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_rootfs) failed with exit code '1'
Can anyone help me out in this case?
I tried two approaches, as follows. Both of them worked.
First method: I cleaned all related packages and rebuilt the whole image.
$ bitbake -c cleanall -c cleansstate <recipes>
Here <recipes> consisted of all build-time and runtime dependent packages, but it is a little confusing for inexperienced users to determine which ones those are.
Second method: I wiped out the build/tmp/, cache/ and sstate-cache/ folders and rebuilt all Yocto packages (see the commands below).
Nothing went wrong any more. It is a really bad idea if you are in a critical period of time, but if you have free time it can be helpful.
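For reference, the second method boils down to something like this, run from inside the build directory (a sketch only; double-check the paths before deleting anything):

$ rm -rf tmp/ cache/ sstate-cache/
$ bitbake agl-demo-platform

Deleting sstate-cache/ throws away all shared-state artifacts, which is why the subsequent rebuild takes so long.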

PowerShell command is not recognized in .gitlab-ci.yml

I am trying to execute a PowerShell script from the .gitlab-ci.yml file, but the powershell command is not recognized. The PowerShell script which I am trying to trigger just has a single line that prints a message.
Running with gitlab-runner 11.5.0 (3afdaba6)
on docker-auto-scale fa6cab46
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha356:7834f561ba80e65515163209a3f952fcd1d11f9ce4420ba63d952e5b52b77e1 for ruby:2.5 ...
Running on runner-fa6cab46-project-9772339-concurrent-0 via runner-srm-1545289526-a4a36a59...
Cloning repository...
Cloning into 'builds/leo.danny/jenkinsintegration'...
Checking out 4b9beba2 as master...
Skipping Git submodules setup
$ powershell "./JenkinsPowershell.ps1"
/bin/bash: line 70: powershell: command not found
ERROR: Job failed: exit code 1
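The job runs in the ruby:2.5 Docker image, which does not contain a powershell binary, so /bin/bash reports "command not found". One way around this is to run the job in an image that ships PowerShell Core; a sketch (the job name and the mcr.microsoft.com/powershell image are assumptions, and note that PowerShell Core's executable is pwsh rather than powershell):

run-powershell-script:
  image: mcr.microsoft.com/powershell:latest
  script:
    # pwsh is the PowerShell Core executable available in this image
    - pwsh -File ./JenkinsPowershell.ps1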