Scenario:
We have a pipeline named example_pipeline
example_pipeline has a step named step_1 (not part of any affinity group)
example_pipeline has 2 steps step_2 & step_3 which are under an affinity group named example_affinity_group
step_3 depends on step_1 and step_2 via inputSteps in the pipelines YML
Now step_2 is also waiting for step_1 to finish, even though it has no dependency on it. What is the reason for this?
Pipelines YML
pipelines:
  - name: example_pipeline
    steps:
      - name: step_1
        type: Bash
        execution:
          onExecute:
            - echo "1"
      - name: step_2
        type: Bash
        configuration:
          affinityGroup: example_affinity_group
        execution:
          onExecute:
            - echo "2"
      - name: step_3
        type: Bash
        configuration:
          affinityGroup: example_affinity_group
          inputSteps:
            - name: step_1
            - name: step_2
        execution:
          onExecute:
            - echo "3"
Pipelines Graph View
An entire affinityGroup runs on the same node. If any step in the affinityGroup depends on a step outside of it, the whole affinityGroup has to wait, because Pipelines cannot queue some steps of the group now and the rest later. This is done to optimize build-node utilization: a build node should not sit idle waiting for input steps to finish. As a result, every step in the affinity group waits until the group's external input steps have completed.
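If step_2 genuinely has no dependency on step_1 and does not need to share a node or workspace with step_3, one option is to take step_2 out of the affinity group so it can be scheduled independently. A minimal sketch, assuming you do not actually need the shared state an affinity group provides for step_2:

```yaml
pipelines:
  - name: example_pipeline
    steps:
      - name: step_1
        type: Bash
        execution:
          onExecute:
            - echo "1"
      # step_2 is no longer in the affinity group, so it can be queued
      # as soon as the run starts instead of waiting for step_1
      # (step_3's input) to finish.
      - name: step_2
        type: Bash
        execution:
          onExecute:
            - echo "2"
      - name: step_3
        type: Bash
        configuration:
          affinityGroup: example_affinity_group
          inputSteps:
            - name: step_1
            - name: step_2
        execution:
          onExecute:
            - echo "3"
```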
Related
I have configured 2 jobs that I want to run. The first job does the actual building of my software on multiple configurations. The second job waits until the first set of jobs completes, and then posts a summary of the run to a Slack channel.
I found that using needs accomplishes this, however I am stuck when it comes to actually obtaining the status of each individual matrix job and not just a global status.
For example I have something as follows
name: Build and test
jobs:
  build-and-test:
    strategy:
      matrix:
        config:
          - name: 'Ubuntu 18.04'
            runner: 'ubuntu-18.04'
            id: 'u18'
          - name: 'Ubuntu 20.04'
            runner: 'ubuntu-20.04'
            id: 'u20'
      fail-fast: false
    runs-on: ${{ matrix.config.runner }}
    steps:
      - name: Step 1
        id: step1
        run: |
          echo "To be filled in"
      - name: Step 2
        id: step2
        run: |
          echo "To be filled in"
  webhook-update:
    needs: build-and-test
    if: always()
    runs-on: ubuntu-20.04
    steps:
      - name: Send webhook update for all jobs
        run: |
          # In the future this will push to Slack, but for now using echo
          echo ${{ needs.build-and-test.result }}
          # Ideally I want to also have access to something like u18.result and u20.result
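The needs context only exposes one result per job, not per matrix entry. One common workaround is to have each matrix job record its own status in an artifact and collect them all in the summary job. A sketch, assuming the job and matrix names from the example above and actions/upload-artifact and actions/download-artifact v4:

```yaml
  build-and-test:
    # ...same matrix strategy and build steps as above, then:
    steps:
      - name: Record this configuration's status
        if: always()
        run: echo "${{ job.status }}" > status-${{ matrix.config.id }}.txt
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: status-${{ matrix.config.id }}
          path: status-${{ matrix.config.id }}.txt

  webhook-update:
    needs: build-and-test
    if: always()
    runs-on: ubuntu-20.04
    steps:
      # Downloads every status-* artifact (one per matrix configuration)
      - uses: actions/download-artifact@v4
        with:
          pattern: status-*
      - name: Print per-configuration results
        run: grep -r . status-*
```

The summary step can then parse each status-u18/status-u20 file instead of relying on the single aggregated result.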
Wanted to know if there is any flag/option for Concourse tasks inside a single job so that all tasks get executed regardless of any task failing.
Thanks!
Totally. By default, tasks run sequentially. If you want them to run independently of each other, place them under the in_parallel key, as in the following pipeline:
jobs:
  - name: parallel-tasks
    plan:
      - in_parallel:
          - task: failing-task
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: alpine
              run:
                path: /bin/sh
                args: ["-c", "exit 1"]
          - task: passing-task
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: alpine
              run:
                path: /bin/sh
                args: ["-c", "exit 0"]
Running it shows both tasks starting at the same time: passing-task succeeds while failing-task fails, and the build as a whole is marked failed.
Note that in_parallel works with tasks as well as resources (e.g. running get steps in parallel).
I am playing around with Azure DevOps container jobs and service containers. My use case is as follows, and I (unfortunately) have to do everything on private hosted build agents.
I am running my job as a container job in container A.
I have specific command-line software (Fortify) installed on container B.
Basically I want one of the steps running on container A to run in container B instead (to do the Fortify scan, using the code from the workspace). Of course I could do it in a separate job, but I'd prefer to do it in the same job.
Any ideas if this is possible at the moment?
Thanks
Cool, I just read that this feature will be available in the sprint 163 release!
https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-163-update
resources:
  containers:
    - container: python
      image: python:3.8
    - container: node
      image: node:13.2
jobs:
  - job: example
    container: python
    steps:
      - script: echo Running in the job container
      - script: echo Running on the host
        target: host
      - script: echo Running in another container, in restricted commands mode
        target:
          container: node
          commands: restricted
You can use the step-level target property to choose which container or host each step runs on.
For example:
resources:
  containers:
    - container: pycontainer
      image: python:3.8
steps:
  - task: SampleTask@1
    target: host
  - task: AnotherTask@1
    target: pycontainer
I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say that I have a 3-step workflow and the workflow failed at step 2. I'd like to resubmit the workflow from step 2, reusing successful step 1's artifact. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a workflow at runtime, depending on parameters. In this example the 'print-hello' template may or may not be executed depending on the input parameter, 'should-print'. When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped, since 'should-print' will evaluate false. When submitted with
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed, since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: conditional-
spec:
  entrypoint: conditional-example
  arguments:
    parameters:
      - name: should-print
        value: "false"
  templates:
    - name: conditional-example
      inputs:
        parameters:
          - name: should-print
      steps:
        - - name: print-hello
            template: whalesay
            when: "{{inputs.parameters.should-print}} == true"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
If you use conditions on each step, you will be able to start from whichever step you like by supplying the appropriate condition.
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with a coin-flip example.
You can see many examples on their GitHub page.
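Applied to the 3-step case from the question, the idea looks roughly like this. This is only a sketch: the parameter name start-from and the template names are made up for illustration. Each step is guarded by a when condition against a workflow parameter, so on resubmission you can skip the steps that already succeeded:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resumable-
spec:
  entrypoint: main
  arguments:
    parameters:
      # On resubmission, pass -p start-from=2 to skip step 1
      - name: start-from
        value: "1"
  templates:
    - name: main
      steps:
        - - name: step-1
            template: echo
            arguments:
              parameters: [{name: msg, value: "step 1"}]
            when: "{{workflow.parameters.start-from}} <= 1"
        - - name: step-2
            template: echo
            arguments:
              parameters: [{name: msg, value: "step 2"}]
            when: "{{workflow.parameters.start-from}} <= 2"
        - - name: step-3
            template: echo
            arguments:
              parameters: [{name: msg, value: "step 3"}]
            when: "{{workflow.parameters.start-from}} <= 3"
    - name: echo
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3
        command: [sh, -c]
        args: ["echo {{inputs.parameters.msg}}"]
```

Note also that, depending on your Argo version, `argo retry <workflow-name>` can restart a failed workflow from the point of failure, reusing the outputs of steps that already succeeded, which may be simpler than hand-rolled conditions.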
In Concourse CI, by default, the underlying container for a job's task is instantiated and run as the root user.
If the container used for my task needs to run as a different user (e.g. postgres), how can I do that in Concourse?
Concourse tasks provide a user parameter to explicitly set the user to run its container as.
See http://concourse-ci.org/running-tasks.html#task-run-user .
Here is a sample Concourse pipeline to demonstrate the use of that parameter:
---
jobs:
  - name: check-container-user
    plan:
      - do:
          - task: container-user-postgres
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: postgres
                  tag: "latest"
              run:
                user: postgres
                path: sh
                args:
                  - -exc
                  - |
                    whoami
                    echo "Container running with postgres user"