I have a Concourse server where a build is stuck on the "preparing build" stage:
[screenshot: the build hangs at "preparing build"]
This issue only started happening after I specified a paths list in my git-resource source config:
# Dockerfile source
- name: test-git
  type: git
  source:
    uri: ((git-uri))
    branch: main
    paths:
      - Dockerfile
The Dockerfile was in the top-level directory. I also tried moving the Dockerfile to another folder, docker-file, and then using a glob: docker-file/* and docker-file/**, but neither worked.
ref:
https://github.com/concourse/git-resource
I'm wondering if there are any suggestions on what might be causing this.
You have a job with a test-git input unrestricted by paths, so the job was starting fine.
Once you restrict the input to selected paths (in this case Dockerfile), Concourse waits for changes to that path only, and it doesn't take the pre-existing Dockerfile into account. Hence the hang.
Check in a comment or some whitespace change to the Dockerfile and the job should start again.
I agree that this behavior, combined with the "latest version of resource not found" message, is perplexing.
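For example, any commit that touches the Dockerfile will give the paths-filtered resource a version to build (a sketch; the comment text is arbitrary):

# Append a harmless comment so the commit touches the filtered path
echo "# trigger a new resource version" >> Dockerfile
git add Dockerfile
git commit -m "Touch Dockerfile to unstick the paths-filtered resource"
git push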
I am trying to set my working directory with an environment variable so that I can handle two different situations. However, if I use a context expression with working-directory, it does not find files in the directory, but if I hardcode the path, it does. I have tried many different syntax variations; I'll paste one iteration below so it is easier to see what I am trying to accomplish.
- uses: actions/checkout@v2
. . .
- name: Monorepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MONO' # Run if monorepo
  run: |
    echo "WORKING_DIR = ${GITHUB_WORKSPACE}/actors/${{ env.ACTOR_NAME }}" >> $GITHUB_ENV
- name: Multirepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MULTI' # Run if multirepo
  run: |
    echo "WORKING_DIR = ${GITHUB_WORKSPACE}" >> $GITHUB_ENV
- name: Build wasmcloud actor
  run: make
  working-directory: ${{ env.WORKING_DIR }} # If I hardcode the path here it works
The environment variable shows the correct path during debugging, formatted as: /home/runner/work/REPO_NAME/REPO_NAME/actors/ACTOR_NAME
For step 3, if I type working-directory: actors/ACTOR_NAME it works (until a later issue :P where it again does not find a directory).
These are my first few days with GitHub Actions, so help would be really appreciated. I am not sure why a context expression is not accepted but a static path is; examples often show working-directory using context.
See Benjamin W.'s comments above for the answer.
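For the record, the likely culprit (and presumably what those comments point out) is the spaces around = in the echo: lines appended to $GITHUB_ENV are parsed literally at the first =, so "WORKING_DIR = …" defines a variable whose name ends in a space, leaving env.WORKING_DIR empty. A sketch of the fix:

- name: Monorepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MONO'
  run: |
    # No spaces around '=' when writing to $GITHUB_ENV
    echo "WORKING_DIR=${GITHUB_WORKSPACE}/actors/${{ env.ACTOR_NAME }}" >> $GITHUB_ENV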
I have written a bash script that processes some data and puts it in one file. My intention is to send a Slack alert if there is content in that file; if not, it should not send the alert. Is there a way to do this in Concourse?
You should take advantage of the Concourse community's open-source resource types; there's a list here. There is a Slack resource listed on that page, but I use this one (not included in the list because its authors haven't added it): https://github.com/cloudfoundry-community/slack-notification-resource.
That will give you the ability to add a put step to your job plan that sends a Slack message. As for the logic of your original ask, you can use try and on_success. Your task might look something like this:
- try:
    task: do-a-thing
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: YOUR_TASK_IMAGE
          tag: latest
      inputs:
        - name: some-input
      params:
        FILE: some-file
      run:
        path: /bin/sh
        args:
          - -ec
          - |
            [ ! -z "$(cat some-input/${FILE})" ]
    on_success:
      put: slack
      params:
        <your slack resource put params go here>
The on_success part will run if the code defined in the task's run section returns 0. The script there just checks that the file contains more than zero bytes. Because the task is wrapped in a try step, the step will succeed regardless of whether the task succeeds (and hence sends you a message), and the build will move on to the next step in the plan.
I implemented a bunch of infrastructure checks (PowerShell scripts) that need to be run on Windows Servers (most of them use the Get-WmiObject cmdlet). I put them, along with their Pester tests, on GitLab and am trying to build a pipeline.
I have read creating-your-first-windows-container-with-docker-for-windows and building-a-simple-release-pipeline-in-powershell-using-psake-pester-and-psdeploy, but I am very confused. My understanding is that to have the code run on GitLab CI, I will need to build a Windows Server Docker image?
The following is my .gitlab-ci.yml file, but it has authentication errors; the image can be found here:
image: ltsc2019

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    # run PowerShell script
    - powershell -File "\Deploy\Build.ps1"

test:
  stage: test
  script:
    - powershell -File "\Deploy\CodeCoverage.ps1"

deploy:
  stage: deploy
  script:
    - powershell -File "\Deploy\Deploy_Local.ps1"
It wouldn't pass the initial build, and here are the errors I got:
# Error 1
ERROR: Job failed: Error response from daemon: pull access denied for ltsc2019, repository does not exist or may require 'docker login' (executor_docker.go:168:3s)
# Error 2 (this happened because I added 'shell: "powershell"'
# after executor in the gitlab-runner config file)
ERROR: Preparation failed: Docker doesn't support shells that require script file
ltsc2019 is one tag of mcr.microsoft.com/windows/servercore.
You need to reference this image by its full name at the beginning of your .gitlab-ci.yml:
image: mcr.microsoft.com/windows/servercore:ltsc2019
For anyone who struggles to get Docker images working with Docker for Windows: note that the Docker executor currently doesn't support Docker for Windows. Check out the available executors if you are building a pipeline that needs a container to run it.
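If the PowerShell scripts just need a Windows machine rather than a container, one common alternative is registering a runner with the shell executor, which runs PowerShell directly. A minimal sketch of the relevant config.toml section (the name, url, and token are placeholders):

[[runners]]
  name = "windows-powershell-runner"    # placeholder
  url = "https://gitlab.example.com/"   # placeholder
  token = "RUNNER_TOKEN"                # placeholder
  executor = "shell"
  shell = "powershell"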
I want to create ZIP files on AppVeyor to publish them on GitHub as a release.
Currently, the build process performs the following steps:
Install Node.js v7
Start the .\Build-All.bat
The Build.bat performs the following steps:
Create Temp and Build Directory
Move Source to Temp
Install dependencies with npm install
Start electron-packager to create binary files (see the directory structure of the /Build/ directory)
Directory Structure:
/Source/
/Build/
├── /DSTEd-darwin-x64/
├── /DSTEd-linux-armv7l/
├── /DSTEd-linux-ia32/
├── /DSTEd-linux-x64/
├── /DSTEd-mas-x64/
├── /DSTEd-win32-ia32/
└── /DSTEd-win32-x64/
/Temp/
/Build.bat
Here is what I want:
Package each build directory (for example /Build/DSTEd-win32-x64/) into a ZIP archive like /Build/DSTEd-win32-x64.zip
Add all ZIP archives (/Build/DSTEd-*-*.zip) to the release
I manually created a release on GitHub as a sample; this is what I want:
https://github.com/DST-Tools/DSTEd/releases/tag/1.0.0
Here is my appveyor.yml:
version: 1.0.0-{build}

# Set the Node Version
environment:
  matrix:
    - nodejs_version: "7"

# Install scripts. (runs after repo cloning)
install:
  - ps: Install-Product node $env:nodejs_version
  - npm -g install electron-packager
  - .\Build-All.bat

# Caching
cache:
  - node_modules

# Deployment Options
deploy:
  tag: $(appveyor_build_version)
  release: 'DSTEd v${appveyor_build_version} - Pre-Release (Preview)'
  description: ' ![Preview](https://github.com/DST-Tools/DSTEd/raw/master/Screenshots/preview.png) ## Pre-Release v1.0.0 (Preview) Builded binarys for `Windows` (`32bit` & `64bit`), `Linux` (`32bit`, `64bit` & `armv7`) and `Mac OS X` (`darwin` & `mas`, only `64bit`).'
  provider: GitHub
  auth_token:
    secure: b202f536350628ff69af69d08daee9f76a9cff20
  artifact: '**\*.zip'
  draft: false
  prerelease: true
  on:
    branch: master
    appveyor_repo_tag: true

matrix:
  fast_finish: true

build: OFF
test: OFF
The missing part is artifact packaging. You can list all those folders as artifacts, and AppVeyor will zip them for you. After that, the deployment will "see" them.
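For example, a sketch using the folder names from the question (a directory listed as an artifact path is zipped by AppVeyor automatically):

artifacts:
  - path: Build\DSTEd-win32-x64   # a directory path is zipped automatically
    name: DSTEd-win32-x64
  - path: Build\DSTEd-linux-x64
    name: DSTEd-linux-x64
  # ...one entry per build folder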
Side note: you might want to remove the on/branch: master part, because in most cases the tag name replaces the branch name in the incoming webhook. More details are here. In general, I would recommend starting with the simplest possible deployment configuration and adding settings one by one after the basic one works.
Packaging the artifacts is very complex. The filters you can define per the docs don't work correctly.
I've implemented my own solution triggered in before_deploy. Before the deployment phase starts, a script packages the files as ZIP archives and adds them as artifacts:
# Deployment Options
before_deploy:
  - node .\Tools\PackageBuild.js
  - ps: Get-ChildItem .\Build\*.zip | % { Push-AppveyorArtifact $_.FullName -FileName $_.Name }
In the deployment process, all available artifacts are included by leaving the artifact property blank:
deploy:
  [...]
  artifact: # leave blank
What's the best way to pass parameters between Concourse tasks and jobs? For example, if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing between tasks within the same job, you can use output artifacts (https://concourse-ci.org/running-tasks.html#outputs), and if you are passing between jobs, you can use resources (like putting it in git or S3). For example, if you are passing between tasks, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
  - name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
The script fill-in-output.sh will put the file that contains the unique ID into the unique-id/ path. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and uses that unique-id file.
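A minimal sketch of what fill-in-output.sh might do (the id file name and the uuidgen call are illustrative assumptions, not from the original answer):

#!/bin/sh
set -eu

# Concourse mounts each declared output as a directory in the task's
# working directory, so writing to unique-id/ publishes the file.
uuidgen > unique-id/id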
In addition to task outputs, resources automagically place files for you in their working directory.
For example, I have a pipeline job as follows:
jobs:
  - name: build
    plan:
      - get: git-some-repo
      - put: push-some-image
        params:
          build: git-some-repo/the-image
      - task: Use-the-image-details
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: alpine
          inputs:
            - name: push-some-image
          run:
            path: sh
            args:
              - -exc
              - |
                ls -lrt push-some-image
                cat push-some-image/repository
                cat push-some-image/digest
We'll see the details of the image push from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data between a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For passing between jobs, when simple (e.g. string) data has to be passed and using git would be overkill, the keyval resource [1] seems to be a good solution.
The readme describes how the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource
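A minimal sketch of the idea, based on the resource's README (the property name BUILD_ID is an assumption here, and details such as the put param name should be checked against the README):

resource_types:
  - name: keyval
    type: docker-image
    source:
      repository: swce/keyval-resource

resources:
  - name: keyval
    type: keyval

jobs:
  - name: generate
    plan:
      - task: make-id
        config:
          platform: linux
          image_resource: # ...
          outputs:
            - name: keyvalout
          run:
            path: sh
            args:
              - -ec
              # write key=value pairs into a properties file
              - echo "BUILD_ID=$(date +%s)" > keyvalout/keyval.properties
      - put: keyval
        params:
          file: keyvalout/keyval.properties

  - name: consume
    plan:
      - get: keyval
        passed: [generate]
        trigger: true
      # a downstream task can read keyval/keyval.properties as an input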