I'm following the Stark & Wayne tutorial and ran into a problem:
The pipeline fails with:
hijack: Backend error: Exit status: 500, message {"Type":"","Message": "
runc exec: exit status 1: exec failed: container_linux.go:247:
starting container process caused \"exec format error\"\n","Handle":""}
I have one git resource and one job with one task:
- task: test
  file: resource-ci/ci/test.yml
test.yml file:
platform: linux
image_resource:
  type: docker-image
  source:
    repository: busybox
    tag: latest
inputs:
- name: resource-tool
run:
  path: resource-tool/scripts/deploy.sh
deploy.sh is a simple dummy file with one echo command:
echo [+] Testing in the process ...
So what could it be?
This error means that the interpreter your script tries to invoke is unavailable in the container running your task.
Busybox doesn't come with bash; it only comes with /bin/sh. Check the shebang in deploy.sh, making sure it looks like:
#!/bin/sh
# rest of script
I also ran into this error when I forgot to add a ! at the top of my pipeline's shell script:
#/bin/bash
# rest of script
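Putting both answers together: a minimal deploy.sh that works on the busybox image might look like this (a sketch; quoting the echo argument is my addition):
#!/bin/sh
# busybox ships /bin/sh but not bash; a valid shebang must start with #!
echo "[+] Testing in the process ..."
A missing or malformed shebang is a typical cause of "exec format error", because the kernel cannot tell which interpreter should run the file.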
We are trying to build our GCP instance template using GitHub Actions, where we build our Java archives and transfer them to the GCP instance from a GitHub-hosted Ubuntu machine.
We have set up an SSH key to access the GCP instance from the Ubuntu machine using:
ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
We get an error response when we try to run the following command:
scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
The script worked fine until 5 Dec 2022 and started giving errors on 6 Dec 2022.
We used to see occasional failures, but those went away when we re-ran the build.
build.yml
# This is a basic workflow to help you get started with Actions
name: build-web

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the develop branch
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      short_sha:
        description: 'Git sha on which build will be created'
        required: true
        default: ''

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.short_sha }}
      # Build using mvn
      - name: Set up JDK 8
        uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'adopt'
          cache: 'maven'
      - name: Build with Maven
        run: mvn --batch-mode --update-snapshots verify
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Gcloud Version
        run: gcloud --version
      - name: Run build script
        run: python ./.github/workflows/build.py ${{ github.event.inputs.short_sha }}
Following is the error log.
We have tried multiple builds for other jobs in the same repository; those failed too.
We have confirmed that the secret is still active.
And the Maven build is successful, hence the file "code.war" exists.
Any idea how to figure out the root cause, or is anyone else facing a similar issue?
###Running: ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'')
####################################
#########Transfer public key to instance############
###Running: cd ~/ && pwd
###Exit Code: 0
###RESPONSE:(b'/home/runner\n', b'')
###Running: gcloud compute instances add-metadata dummy-temp-web --project=projectname --zone=us-east1-b --metadata-from-file ssh-keys=/home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'Updated [https://www.googleapis.com/compute/v1/projects/projectname/zones/us-east1-b/instances/dummy-temp-web].\n')
####################################
#Give time for key to propagate
#########copy to remote############
###Running: scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
###Exit Code: 1
###RESPONSE:(b'', b"Warning: Permanently added '<ip>' (ECDSA) to the list of known hosts.\r\nPermission denied, please try again.\r\nPermission denied, please try again.\r\nroot#<ip>: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\nlost connection\n")
Traceback (most recent call last):
File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 100, in <module>
execute(f'***copyBuldFileToRemoteCMD***', False)
File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line [30](https://github.com/company/code/actions/runs/3628509641/jobs/6119611343#step:7:31), in execute
raise Exception(f'Sorry, bad exit code***process.returncode***')
Exception: Sorry, bad exit code1
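One way to narrow down the root cause (a suggestion beyond what the thread tried): run ssh directly from the runner with verbose output, which prints the keys and signature algorithms the client offers and the reason the server rejects them:
# verbose dry-run of the same authentication scp would use
ssh -i /home/runner/.ssh/temp -o StrictHostKeyChecking=no -vvv root@<ip> true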
I also had a similar issue when I was using ubuntu-latest as the job runner in the yml file.
When I used ubuntu-20.04 instead of ubuntu-latest, the issue was resolved for me.
You can try this in your yml file:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-20.04
It is working for me.
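A plausible explanation for the date (an assumption; the thread doesn't confirm it): the ubuntu-latest label was being migrated from Ubuntu 20.04 to 22.04 around that time, and 22.04 ships an OpenSSH client that disables the legacy ssh-rsa (SHA-1) signature algorithm by default, which can break public-key authentication with RSA keys against older servers. Generating an ed25519 key instead of RSA side-steps that algorithm entirely:
# same key-generation step, but with an ed25519 key
ssh-keygen -t ed25519 -f ~/.ssh/temp -C root -q -N ""
Pinning runs-on: ubuntu-20.04, as above, works because it keeps the older OpenSSH client.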
I have a GitHub repo that contains some Linux utilities.
To run tests I use GitHub Actions and a simple custom workflow that runs on a remote Linux server using a self-hosted runner:
name: CMake

on: [push]

env:
  BUILD_TYPE: Release

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Build
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: cmake --build . --config $BUILD_TYPE
      - name: Run test
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: ./testing/test.sh
The workflow is very simple: I compile the source using cmake and then run a script from the same repo to test the build. The test script is a bash script and looks as follows:
#!/bin/bash
export LD_LIBRARY_PATH=/path/to/bin
cd /path/to/bin
echo -e "Starting the tests"
./run_server &
./run_tests
if [ $? -eq 0 ]; then
    echo "successful"
else
    echo "failed"
    exit 1
fi
Here the script starts a compiled application from my code (run_server) and then runs the testing utility that communicates with it and prints a result.
I use C printf() inside run_tests to print the output strings. If I run this on a local machine I get output like the following:
Starting the tests
test1 executed Ok
test2 executed Ok
test3 executed Ok
successful
Each test takes about 1 second, so the application prints a line like testX executed Ok roughly once per second.
But if it runs using Github actions the output looks a bit different (copied from the Github actions console):
./testing/test.sh
shell: /bin/bash --noprofile --norc -e -o pipefail {0}
env:
BUILD_TYPE: Release
Starting the tests
successful
And even in this case, the output from the bash script is printed only after the script has finished.
So I have 2 problems:
no printf() output from the test application
the script output (printed using echo) comes only after the script has finished
I expected the same behavior from the GitHub Actions run as on a local machine, i.e. lines printed immediately, about one per second.
It looks like GitHub Actions buffers all the output until a step has finished and drops everything printed by the application that runs inside the bash script.
Is there a way to get all the output in real time while a step executes?
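A likely contributor to the missing printf() output (an assumption; nothing in the workflow confirms it): glibc switches stdout from line-buffered to fully buffered when it is not a terminal, and the runner captures step output through a pipe. Forcing line buffering in test.sh is a quick way to test this:
# run the test binary with stdout forced to line-buffered mode
stdbuf -oL ./run_tests
The equivalent fix inside the C code is to call setvbuf(stdout, NULL, _IOLBF, 0) at startup, or fflush(stdout) after each printf().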
I have a Concourse job that pulls a repo into a docker image and then executes a command on it. Now I need to execute a script that comes from the docker image and, after it is done, execute a command inside the repo, something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
  - build
get-git-context.sh is the script coming from my docker image and gradlew is the standard gradle wrapper inside my repo, invoked with the build argument. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job cd'd into / when executing the first command; executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
  - build
But only the second run section is executed. What is the correct way to chain these two commands?
We usually solve this by wrapping the logic in a shell script and setting path: to a shell (/bin/sh or /bin/bash) with the script's path as the corresponding args:
run:
  path: /bin/sh
  args:
  - my-repo_resource/some-ci-folder/build_script.sh
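For illustration, build_script.sh (its contents are assumed here; only its path appears above) could chain the two steps like this:
#!/bin/sh
# stop at the first failing step
set -e
# run the script shipped in the docker image
/get-git-context.sh
# then run the gradle build from inside the repo
cd my-repo_resource
./gradlew build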
The other option would be to define two tasks and pass the resources through the job's workspace, but we usually have more than two steps and this would result in complex pipelines:
plan:
- task: task1
  config:
    ...
    outputs:
    - name: taskOutput
    run:
      path: /get-git-context.sh
- task: task2
  config:
    inputs:
    ## directory defined in task1
    - name: taskOutput
    run:
      path: ./gradlew
      args:
      - build
I implemented a bunch of infrastructure checks (PowerShell scripts) that need to be run on Windows Servers (most of them use the Get-WmiObject cmdlet). I put them, along with their Pester tests, on GitLab and am trying to build a pipeline.
I have read creating-your-first-windows-container-with-docker-for-windows and building-a-simple-release-pipeline-in-powershell-using-psake-pester-and-psdeploy, but I am very confused. My understanding is that to have the code run on GitLab CI, I will need to build a Windows Server docker image?
The following is my .gitlab-ci.yml file, but it has authentication errors; the image can be found here:
image: ltsc2019

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    # run PowerShell script
    - powershell -File "\Deploy\Build.ps1"

test:
  stage: test
  script:
    - powershell -File "\Deploy\CodeCoverage.ps1"

deploy:
  stage: deploy
  script:
    - powershell -File "\Deploy\Deploy_Local.ps1"
It wouldn't pass the initial build, and here are the errors I got:
# Error 1
ERROR: Job failed: Error response from daemon: pull access denied for ltsc2019, repository does not exist or may require 'docker login' (executor_docker.go:168:3s)
# Error 2 (this happened because I added 'shell: "powershell"'
# after executor in the gitlab-runner config file)
ERROR: Preparation failed: Docker doesn't support shells that require script file
ltsc2019 is one tag of the mcr.microsoft.com/windows/servercore.
You need to reference this image by its full name at the beginning of your .gitlab-ci.yml:
image: mcr.microsoft.com/windows/servercore:ltsc2019
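To sanity-check the reference before committing a pipeline change (assuming a Windows machine with Docker switched to Windows containers), you can pull the image locally:
docker pull mcr.microsoft.com/windows/servercore:ltsc2019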
For anyone who struggles to get docker images working with Docker for Windows: please note that the Docker executor currently doesn't support Docker for Windows. Check the executor documentation if you are building a pipeline that needs a container to run it.
I am attempting to trigger a Concourse job from the command line. My pipeline has one resource (a git repo) and one job, which uses that repo. I am seeing:
$ fly -t tutorial trigger-job -j my-pipeline/my-job -w
error: resource not found
However, when I go to the web UI and manually trigger the job by pressing the "+" button in the top right, it works fine.
Here is the full pipeline:
resources:
- name: cruise-source
  type: git
  source:
    uri: git@github.com:my-org/cruise.git
    branch: develop

jobs:
- name: build-image
  public: true
  plan:
  - get: cruise-source
  - task: list-files
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: cruise-source
      run:
        path: ls
        args: [cruise-source]
How can I trigger this job from the CLI?
The "resource not found" you get has nothing to do with the git resource :-) it actually means that the pipeline or job name is wrong. Looking at your pipeline configuration, you should issue
fly -t tutorial trigger-job -j my-pipeline/build-image -w
Or, if your configuration is different from what you have posted, maybe you have a typo in the name of the pipeline or job.
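If in doubt (a suggestion beyond the original answer), let fly list the exact pipeline and job names and copy them from there:
fly -t tutorial pipelines
fly -t tutorial jobs -p my-pipeline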